1. Li Z, An K, Yu H, Luo F, Pan J, Wang S, Zhang J, Wu W, Chang D. Spectrum learning for super-resolution tomographic reconstruction. Phys Med Biol 2024; 69:085018. [PMID: 38373346] [DOI: 10.1088/1361-6560/ad2a94]
Abstract
Objective. Computed tomography (CT) is widely used in industrial high-resolution non-destructive testing, but physical limitations make it difficult to obtain high-resolution images of large-scale objects. The objective is to develop an improved super-resolution technique that preserves small structures and details while efficiently capturing high-frequency information.
Approach. The study proposes a new deep-learning-based method, the spectrum learning (SPEAR) network, for CT image super-resolution. This approach leverages both global information in the image domain and high-frequency information in the frequency domain. The SPEAR network reconstructs high-resolution images from low-resolution inputs by considering not only the main body of the images but also small structures and other details. The symmetric property of the spectrum is exploited to reduce the number of weight parameters in the frequency domain, and a spectrum loss is introduced to enforce preservation of both high-frequency components and global information.
Main results. The network is trained on pairs of low-resolution and high-resolution CT images and fine-tuned on additional low-dose and normal-dose CT image pairs. Experimental results demonstrate that the proposed SPEAR network outperforms state-of-the-art networks in image reconstruction quality. The approach preserves high-frequency information and small structures, leading to better results than existing methods; its ability to generate high-resolution images from low-resolution inputs, even for low-dose CT images, shows its effectiveness in maintaining image quality.
Significance. By simultaneously capturing global information and high-frequency details, the SPEAR network addresses the limitations of existing methods, resulting in more accurate and informative image reconstructions. This advancement can have substantial implications for industrial applications and medical diagnoses that rely on accurate imaging.
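The abstract's idea of combining an image-domain term with a frequency-domain spectrum loss can be illustrated in a few lines of numpy. This is a generic sketch, not the SPEAR network's actual loss: the function name, the `alpha` weight, and the choice of mean-squared differences are all assumptions.

```python
import numpy as np

def spectrum_loss(pred, target, alpha=0.5):
    """Toy spectrum-style loss: image-domain MSE plus an FFT-magnitude
    MSE term, with alpha weighting the frequency-domain part.
    Illustrative only; the actual SPEAR loss is defined in the paper."""
    img_term = np.mean((pred - target) ** 2)
    # rfft2 keeps roughly half the spectrum, exploiting the conjugate
    # symmetry of a real image's Fourier transform -- the same symmetry
    # the abstract says is used to reduce weight parameters.
    f_pred = np.abs(np.fft.rfft2(pred))
    f_target = np.abs(np.fft.rfft2(target))
    freq_term = np.mean((f_pred - f_target) ** 2)
    return (1 - alpha) * img_term + alpha * freq_term
```

Identical images give zero loss; any mismatch in either domain drives the loss up.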
Affiliation(s)
- Zirong Li
- The School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Guangdong, People's Republic of China
- Kang An
- The Key Laboratory of Optoelectronic Technology and Systems, ICT Research Center, Ministry of Education, Chongqing University, Chongqing, People's Republic of China
- Hengyong Yu
- The Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America
- Fulin Luo
- The College of Computer Science, Chongqing University, Chongqing, People's Republic of China
- Jiayi Pan
- The School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Guangdong, People's Republic of China
- Shaoyu Wang
- The Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, People's Republic of China
- Jianjia Zhang
- The School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Guangdong, People's Republic of China
- Weiwen Wu
- The School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Guangdong, People's Republic of China
- The Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, People's Republic of China
- Dingyue Chang
- The China Academy of Engineering Physics, Institute of Materials, Mianyang 621700, People's Republic of China
2. Zhang Y. An unsupervised 2D-3D deformable registration network (2D3D-RegNet) for cone-beam CT estimation. Phys Med Biol 2021; 66. [PMID: 33631734] [DOI: 10.1088/1361-6560/abe9f6]
Abstract
Acquiring CBCTs from a limited scan angle can reduce imaging time, lower the imaging dose, and allow continuous target localization during arc-based treatments with high temporal resolution. However, insufficient scan angle sampling leads to severe distortions and artifacts in the reconstructed CBCT images, limiting their clinical applicability. 2D-3D deformable registration can map a prior fully-sampled CT/CBCT volume to estimate a new CBCT, based on limited-angle on-board cone-beam projections. The resulting CBCT images estimated by 2D-3D deformable registration can suppress the distortions and artifacts and reflect up-to-date patient anatomy. However, the traditional iterative 2D-3D deformable registration algorithm is computationally expensive and time-consuming, taking hours to generate a high-quality deformation vector field (DVF) and the corresponding CBCT. In this work, we developed an unsupervised, end-to-end, 2D-3D deformable registration framework using convolutional neural networks (2D3D-RegNet) to address the speed bottleneck of the conventional iterative algorithm. The 2D3D-RegNet solved the DVFs within 5 seconds for 90 orthogonally-arranged projections covering a combined 90° scan angle, with DVF accuracy superior to 3D-3D deformable registration and on par with the conventional 2D-3D deformable registration algorithm. We also performed a preliminary robustness analysis of 2D3D-RegNet with respect to variations in projection angular sampling frequency, as well as scan angle offsets. The synergy of 2D3D-RegNet with biomechanical modeling was also evaluated, demonstrating that 2D3D-RegNet can function as a fast DVF solution core for further DVF refinement.
Affiliation(s)
- You Zhang
- Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75235, United States of America
3. Palaniappan P, Meyer S, Kamp F, Belka C, Riboldi M, Parodi K, Gianoli C. Deformable image registration of the treatment planning CT with proton radiographies in perspective of adaptive proton therapy. Phys Med Biol 2021; 66:045008. [PMID: 32365335] [DOI: 10.1088/1361-6560/ab8fc3]
Abstract
The purpose of this work is to investigate the potential of using a limited number of in-room proton radiographies to compensate for anatomical changes in adaptive proton therapy. The treatment planning CT is adapted to the treatment delivery scenario relying on 2D-3D deformable image registration (DIR). The proton radiographies, expressed in water equivalent thickness (WET), are simulated for both list-mode and integration-mode detector configurations in pencil beam scanning. Geometrical and analytical simulations of an anthropomorphic phantom in the presence of anatomical changes due to breathing are adopted. A Monte Carlo simulation of proton radiographies based on a clinical CT image in the presence of artificial anatomical changes is also considered. The accuracy of the 2D-3D DIR, calculated as root mean square error, strongly depends on the considered anatomical changes and is deemed adequate for adaptive proton therapy when comparable to the accuracy of conventional 3D-3D DIR. In the geometrical simulation, this is achieved with a minimum of eight or nine radiographies (more than 90% accuracy); negligible improvement (~1%) is obtained with 180 radiographies. Comparing detector configurations, superior accuracy is obtained with list-mode than with integration-mode max (WET with maximum occurrence) and integration-mode mean (average WET weighted by occurrences); moreover, integration-mode max performs better than integration-mode mean. Results are minimally affected by proton statistics. In the analytical simulation, the anatomical changes are approximately compensated (about 60%-70% accuracy) with two proton radiographies, and minor improvement is observed with nine. In the clinical data, two proton radiographies from list-mode demonstrated better performance than nine from integration-mode (more than 100% and about 50%-70% accuracy, respectively), even avoiding the finer grid spacing of the last numerical optimization stage. In conclusion, the choice of detector configuration, as well as the amount and complexity of the considered anatomical changes, determines the minimum number of radiographies to be used.
Affiliation(s)
- Prasannakumar Palaniappan
- Department of Medical Physics - Experimental Physics, Ludwig-Maximilians-Universität München, Munich, Germany
4. Wang Y, Zhong Z, Hua J. DeepOrganNet: On-the-Fly Reconstruction and Visualization of 3D/4D Lung Models from Single-View Projections by Deep Deformation Network. IEEE Trans Vis Comput Graph 2020; 26:960-970. [PMID: 31442979] [DOI: 10.1109/tvcg.2019.2934369]
Abstract
This paper introduces a deep neural network based method, DeepOrganNet, to generate and visualize high-fidelity 3D/4D organ geometric models from single-view medical images with complicated backgrounds in real time. Traditional 3D/4D medical image reconstruction requires hundreds of projections, which costs substantial computational time and delivers an undesirably high imaging/radiation dose to human subjects. Moreover, further processing is usually needed to segment or extract accurate 3D organ models. The computational time and imaging dose can be reduced by decreasing the number of projections, but the reconstructed image quality degrades accordingly. To our knowledge, no existing method directly and explicitly reconstructs multiple 3D organ meshes from a single 2D medical grayscale image on the fly. Given single-view 2D medical images, e.g., 3D/4D-CT projections or X-ray images, the end-to-end DeepOrganNet framework can efficiently and effectively reconstruct 3D/4D lung models with a variety of geometric shapes by learning smooth deformation fields from multiple templates based on a trivariate tensor-product deformation technique, leveraging an informative latent descriptor extracted from the input 2D images. The proposed method is guaranteed to generate high-quality, high-fidelity manifold meshes for 3D/4D lung models, which current deep learning based approaches to shape reconstruction from a single image cannot. The major contributions of this work are to accurately reconstruct 3D organ shapes from a single 2D projection, significantly shorten the procedure time to allow on-the-fly visualization, and dramatically reduce the imaging dose for human subjects. Experimental results are evaluated and compared with a traditional reconstruction method and the state of the art in deep learning, using extensive 3D and 4D examples, including both synthetic phantom and real patient datasets. The proposed method needs only several milliseconds to generate organ meshes with 10K vertices, which gives it great potential for use in real-time image guided radiation therapy (IGRT).
5. Niebler S, Schömer E, Tjaden H, Schwanecke U, Schulze R. Projection-based improvement of 3D reconstructions from motion-impaired dental cone beam CT data. Med Phys 2019; 46:4470-4480. [DOI: 10.1002/mp.13731]
Affiliation(s)
- Stefan Niebler
- Institute of Computer Science, Johannes Gutenberg University, 55099 Mainz, Germany
- Elmar Schömer
- Institute of Computer Science, Johannes Gutenberg University, 55099 Mainz, Germany
- Henning Tjaden
- Computer Vision & Mixed Reality Group, RheinMain University of Applied Sciences, 65195 Wiesbaden, Germany
- Ulrich Schwanecke
- Computer Vision & Mixed Reality Group, RheinMain University of Applied Sciences, 65195 Wiesbaden, Germany
- Ralf Schulze
- Department of Oral and Maxillofacial Surgery, University Medical Center of the Johannes Gutenberg University, 55131 Mainz, Germany
6. Mao W, Liu C, Gardner SJ, Siddiqui F, Snyder KC, Kumarasiri A, Zhao B, Kim J, Wen NW, Movsas B, Chetty IJ. Evaluation and Clinical Application of a Commercially Available Iterative Reconstruction Algorithm for CBCT-Based IGRT. Technol Cancer Res Treat 2019; 18:1533033818823054. [PMID: 30803367] [PMCID: PMC6373994] [DOI: 10.1177/1533033818823054]
Abstract
PURPOSE We quantitatively evaluated the image quality of a new commercially available iterative cone-beam computed tomography (CBCT) reconstruction algorithm against standard CBCT reconstruction results. METHODS This iterative CBCT reconstruction pipeline uses a finite element solver (AcurosCTS)-based scatter correction and a statistical (iterative) reconstruction in addition to a standard kernel-based correction followed by filtered back-projection-based Feldkamp-Davis-Kress reconstruction. Standard full-fan half-rotation Head, half-fan full-rotation Head, and standard Pelvis CBCT protocols were used to scan a quality assurance phantom, evaluated via the following image quality metrics: uniformity, HU constancy, spatial resolution, low contrast detection, noise level, and contrast-to-noise ratio. An anthropomorphic head phantom was scanned to verify noise reduction. Clinical image data sets for 5 head/neck patients and 5 prostate patients were qualitatively evaluated. RESULTS The quality assurance phantom study showed that, relative to filtered back-projection-based CBCT, the iterative algorithm reduced noise from 28.8 ± 0.3 HU to a range between 18.3 ± 0.2 and 5.9 ± 0.2 HU for Full-Fan Head scans, from 14.4 ± 0.2 HU to a range between 12.8 ± 0.3 and 5.2 ± 0.3 HU for Half-Fan Head scans, and from 6.2 ± 0.1 HU to a range between 3.8 ± 0.1 and 2.0 ± 0.2 HU for Pelvis scans. Spatial resolution was marginally improved, while results for uniformity and HU constancy were similar. For the head phantom study, noise was reduced from 43.6 HU to a range between 24.8 and 13.0 HU for a Full-Fan Head scan and from 35.1 HU to a range between 22.9 and 14.0 HU for a Half-Fan Head scan. The patient data study showed that photon starvation and streak artifacts were reduced, and image noise in specified target regions was reduced to 62% ± 15% for the 10 patients. CONCLUSION Noise and contrast-to-noise ratio were significantly improved with the iterative CBCT reconstruction algorithm relative to the filtered back-projection-based method. These improvements will enhance the accuracy of CBCT-based image-guided applications.
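The noise and contrast-to-noise figures quoted above reduce to ROI statistics on the reconstructed images. A minimal numpy sketch (the function names and ROI masks are hypothetical, not from the study):

```python
import numpy as np

def roi_noise(img, roi):
    """Noise as the HU standard deviation inside a uniform ROI
    (roi is a boolean mask over the image)."""
    return float(np.std(img[roi]))

def cnr(img, roi_obj, roi_bg):
    """Contrast-to-noise ratio: absolute difference of the object and
    background ROI means, divided by the background noise."""
    return abs(img[roi_obj].mean() - img[roi_bg].mean()) / np.std(img[roi_bg])
```

In practice the ROIs would be drawn on uniform phantom regions; here they are just boolean masks.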
Affiliation(s)
- Weihua Mao, Chang Liu, Stephen J. Gardner, Farzan Siddiqui, Karen C. Snyder, Akila Kumarasiri, Bo Zhao, Joshua Kim, Ning Winston Wen, Benjamin Movsas, Indrin J. Chetty
- Department of Radiation Oncology, Henry Ford Health System, Detroit, MI, USA (all authors)
7. Liu Y, Tao X, Ma J, Bian Z, Zeng D, Feng Q, Chen W, Zhang H. Motion guided Spatiotemporal Sparsity for high quality 4D-CBCT reconstruction. Sci Rep 2017; 7:17461. [PMID: 29234074] [PMCID: PMC5727071] [DOI: 10.1038/s41598-017-17668-5]
Abstract
Conventional cone-beam computed tomography is often degraded by respiratory motion blur, which negatively affects target delineation. In contrast, four-dimensional cone-beam computed tomography (4D-CBCT) can describe tumor and organ motion. However, for current on-board CBCT imaging systems, the slow rotation speed limits the number of projections at each phase, and the associated reconstructions are contaminated by noise and streak artifacts when conventional algorithms are used. To address this problem, we propose a novel framework, Motion guided Spatiotemporal Sparsity (MgSS), to reconstruct 4D-CBCT from the under-sampled measurements. In this algorithm, we divide the CBCT images at each phase into cubes (3D blocks), track the cubes across phases with estimated motion field vectors, and then apply regional spatiotemporal sparsity to the tracked cubes. Specifically, we recast the tracked cubes into a four-dimensional array and use the higher-order singular value decomposition (HOSVD) technique to analyze the regional spatiotemporal sparsity. The blocky spatiotemporal sparsity is then incorporated into a cost function for image reconstruction. Phantom simulations and real patient data are used to evaluate the algorithm. Results show that MgSS achieved improved 4D-CBCT image quality with less noise and fewer artifacts than conventional algorithms.
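The regional spatiotemporal sparsity step rests on the higher-order SVD of the stacked-cube tensor. Below is a minimal, self-contained truncated HOSVD in numpy; it is only the decomposition building block, not the full MgSS reconstruction, and the function names and rank-truncation scheme are illustrative assumptions.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def truncated_hosvd(t, ranks):
    """Truncated higher-order SVD: take the leading left singular vectors
    of each unfolding as factor matrices, project to the core tensor,
    then expand back. Returns a low-multilinear-rank approximation of t
    (the 'sparsified' cube stack)."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):
        # contract mode `mode` of the core with u^T
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    approx = core
    for mode, u in enumerate(factors):
        # expand the core back to the original shape with u
        approx = np.moveaxis(np.tensordot(u, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx
```

With full ranks the reconstruction is exact; truncating the ranks discards the small singular values, which is where the sparsity-style regularization comes from.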
Affiliation(s)
- Yang Liu, Xi Tao, Jianhua Ma, Zhaoying Bian, Dong Zeng, Qianjin Feng, Wufan Chen, Hua Zhang
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, China (all authors)
8. Biguri A, Dosanjh M, Hancock S, Soleimani M. A general method for motion compensation in x-ray computed tomography. Phys Med Biol 2017; 62:6532-6549. [DOI: 10.1088/1361-6560/aa7675]
9. Zhang Y, Ma J, Iyengar P, Zhong Y, Wang J. A new CT reconstruction technique using adaptive deformation recovery and intensity correction (ADRIC). Med Phys 2017; 44:2223-2241. [PMID: 28380247] [DOI: 10.1002/mp.12259]
Abstract
PURPOSE Sequential same-patient CT images may involve deformation-induced and non-deformation-induced voxel intensity changes. An adaptive deformation recovery and intensity correction (ADRIC) technique was developed to improve the CT reconstruction accuracy, and to separate deformation from non-deformation-induced voxel intensity changes between sequential CT images. MATERIALS AND METHODS ADRIC views the new CT volume as a deformation of a prior high-quality CT volume, but with additional non-deformation-induced voxel intensity changes. ADRIC first applies the 2D-3D deformation technique to recover the deformation field between the prior CT volume and the new, to-be-reconstructed CT volume. Using the deformation-recovered new CT volume, ADRIC further corrects the non-deformation-induced voxel intensity changes with an updated algebraic reconstruction technique ("ART-dTV"). The resulting intensity-corrected new CT volume is subsequently fed back into the 2D-3D deformation process to further correct the residual deformation errors, which forms an iterative loop. By ADRIC, the deformation field and the non-deformation voxel intensity corrections are optimized separately and alternately to reconstruct the final CT. CT myocardial perfusion imaging scenarios were employed to evaluate the efficacy of ADRIC, using both simulated data of the extended-cardiac-torso (XCAT) digital phantom and experimentally acquired porcine data. The reconstruction accuracy of the ADRIC technique was compared to the technique using ART-dTV alone, and to the technique using 2D-3D deformation alone. The relative error metric and the universal quality index metric are calculated between the images for quantitative analysis. The relative error is defined as the square root of the sum of squared voxel intensity differences between the reconstructed volume and the "ground-truth" volume, normalized by the square root of the sum of squared "ground-truth" voxel intensities. 
In addition to the XCAT and porcine studies, a physical lung phantom measurement study was also conducted. Water-filled balloons with various shapes/volumes and concentrations of iodinated contrasts were put inside the phantom to simulate both deformations and non-deformation-induced intensity changes for ADRIC reconstruction. The ADRIC-solved deformations and intensity changes from limited-view projections were compared to those of the "gold-standard" volumes reconstructed from fully sampled projections. RESULTS For the XCAT simulation study, the relative errors of the reconstructed CT volume by the 2D-3D deformation technique, the ART-dTV technique, and the ADRIC technique were 14.64%, 19.21%, and 11.90% respectively, by using 20 projections for reconstruction. Using 60 projections for reconstruction reduced the relative errors to 12.33%, 11.04%, and 7.92% for the three techniques, respectively. For the porcine study, the corresponding results were 13.61%, 8.78%, and 6.80% by using 20 projections; and 12.14%, 6.91%, and 5.29% by using 60 projections. The ADRIC technique also demonstrated robustness to varying projection exposure levels. For the physical phantom study, the average DICE coefficient between the initial prior balloon volume and the new "gold-standard" balloon volumes was 0.460. ADRIC reconstruction by 21 projections increased the average DICE coefficient to 0.954. CONCLUSION The ADRIC technique outperformed both the 2D-3D deformation technique and the ART-dTV technique in reconstruction accuracy. The alternately solved deformation field and non-deformation voxel intensity corrections can benefit multiple clinical applications, including tumor tracking, radiotherapy dose accumulation, and treatment outcome analysis.
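The relative error metric defined in the abstract translates directly into code. A short sketch (the function name is assumed):

```python
import numpy as np

def relative_error(recon, truth):
    """Relative error as defined in the abstract: root-sum-of-squares of
    the voxel intensity differences between the reconstructed and
    ground-truth volumes, normalized by the root-sum-of-squares of the
    ground-truth intensities, returned as a percentage."""
    diff = np.sqrt(np.sum((recon - truth) ** 2))
    norm = np.sqrt(np.sum(truth ** 2))
    return 100.0 * diff / norm
```

A perfect reconstruction scores 0%; an all-zero reconstruction scores 100%.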
Affiliation(s)
- You Zhang
- Department of Radiation Oncology, UT Southwestern Medical Center at Dallas, Dallas, TX, 75390, USA
- Jianhua Ma
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Puneeth Iyengar
- Department of Radiation Oncology, UT Southwestern Medical Center at Dallas, Dallas, TX, 75390, USA
- Yuncheng Zhong
- Department of Radiation Oncology, UT Southwestern Medical Center at Dallas, Dallas, TX, 75390, USA
- Jing Wang
- Department of Radiation Oncology, UT Southwestern Medical Center at Dallas, Dallas, TX, 75390, USA
10. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes. Biomed Res Int 2016; 2016:4382854. [PMID: 27019849] [PMCID: PMC4785510] [DOI: 10.1155/2016/4382854]
Abstract
By using prior information from planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of the 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed from the feature edges extracted from the planning CT images, nonuniform tetrahedral meshes are automatically generated according to the density field to better characterize the image features; that is, finer meshes are generated around features. Displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of the original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with the corresponding 2D projections. The DVFs are optimized to minimize an objective function that includes the differences between DRRs and projections as well as a regularity term. To further accelerate this 3D-2D registration, a procedure has been developed to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on the projections. The complete method is evaluated quantitatively using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either a uniform orthogonal grid or uniform tetrahedral meshes.
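The objective described in the abstract, DRR-to-projection differences plus a regularity term, can be sketched as follows. The DRR projector itself is outside this sketch (the DRRs are taken as given), and the squared-finite-difference smoothness penalty is one common choice of regularizer, not necessarily the paper's exact term.

```python
import numpy as np

def dvf_regularity(dvf):
    """Smoothness regularizer: squared finite differences of the DVF
    components along each spatial axis. dvf has shape (3, nx, ny, nz)."""
    penalty = 0.0
    for axis in (1, 2, 3):
        penalty += np.sum(np.diff(dvf, axis=axis) ** 2)
    return penalty

def objective(drrs, projections, dvf, lam=0.1):
    """Data fidelity (DRR vs. measured projection, summed over views)
    plus a weighted regularity term on the DVF. The weight `lam` is a
    placeholder value."""
    fidelity = sum(np.sum((d - p) ** 2) for d, p in zip(drrs, projections))
    return fidelity + lam * dvf_regularity(dvf)
```

An optimizer would adjust the DVF (and hence the DRRs) to drive this objective down.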
11. Greenberg AM. Cone beam computed tomography scanning and diagnosis for dental implants. Oral Maxillofac Surg Clin North Am 2016; 27:185-202. [PMID: 25951956] [DOI: 10.1016/j.coms.2015.01.002]
Abstract
Cone beam computed tomography (CBCT) has become an important new technology for oral and maxillofacial surgery practitioners. CBCT provides improved office-based diagnostic capability and applications for surgical procedures, such as CT guidance through the use of computer-generated drill guides. A thorough knowledge of the basic science of CBCT as well as the ability to interpret the images correctly and thoroughly is essential to current practice.
Affiliation(s)
- Alex M Greenberg
- Oral and Maxillofacial Surgery, Columbia University College of Dental Medicine, 630 W. 168th Street, New York, NY 10032, USA; Private Practice Limited to Oral and Maxillofacial Surgery, 18 East 48th Street Suite 1702, New York, NY 10017, USA.
12. Kurz C, Dedes G, Resch A, Reiner M, Ganswindt U, Nijhuis R, Thieke C, Belka C, Parodi K, Landry G. Comparing cone-beam CT intensity correction methods for dose recalculation in adaptive intensity-modulated photon and proton therapy for head and neck cancer. Acta Oncol 2015. [PMID: 26198654] [DOI: 10.3109/0284186x.2015.1061206]
Abstract
BACKGROUND Adaptive intensity-modulated photon and proton radiotherapy (IMRT and IMPT) of head and neck (H&N) cancer requires frequent three-dimensional (3D) dose calculation. We compared two approaches for dose recalculation on the basis of intensity-corrected cone-beam (CB) x-ray computed tomography (CT) images. MATERIAL AND METHODS For nine H&N tumor patients, virtual CTs (vCT) were generated by deformable image registration of the planning CT (pCT) to the CBCT. The second intensity correction approach used population-based lookup tables to scale CBCT intensities to the pCT HU range (CBCT_LUT). IMRT and IMPT plans were generated with a commercial treatment planning system. Dose recalculations on vCT and CBCT_LUT were analyzed using a (3%, 3 mm) gamma-index analysis and comparison of normal tissue and tumor dose/volume parameters. A replanning CT (rpCT) acquired within three days of the CBCT served as reference. Single field uniform dose (SFUD) proton plans were created and recalculated on vCT and CBCT_LUT for proton range comparison. RESULTS Dose/volume parameters showed minor differences between rpCT, vCT and CBCT_LUT in IMRT, but clinically relevant deviations between CBCT_LUT and rpCT in the spinal cord for IMPT. Gamma-index pass rates were higher for vCT than for CBCT_LUT in IMPT (by up to 21 percentage points) and IMRT (by up to 9 percentage points) in most cases. The SFUD-based proton range assessment showed improved agreement between vCT and rpCT, with 88-99% of the depth dose profiles in the beam's eye view agreeing within 3 mm; for CBCT_LUT, only 80-94% of the profiles fulfilled this criterion. CONCLUSION vCT and CBCT_LUT are suitable options for dose recalculation in adaptive IMRT. In the scope of IMPT, the vCT approach is preferable.
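The lookup-table approach amounts to a monotone mapping from raw CBCT intensities onto the planning-CT HU range. A one-line sketch with placeholder breakpoints (the published population-based tables are not reproduced here):

```python
import numpy as np

def apply_lut(cbct, lut_in, lut_out):
    """Map raw CBCT intensities onto the planning-CT HU range via a
    monotone lookup table given as breakpoint pairs lut_in -> lut_out.
    Intensities between breakpoints are linearly interpolated."""
    return np.interp(cbct, lut_in, lut_out)
```

For example, a table mapping raw values 0..100 onto -1000..0 HU sends a raw value of 50 to -500 HU.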
Affiliation(s)
- Christopher Kurz
- Department of Radiation Oncology, Ludwig-Maximilians-University, Munich, Germany
- Department of Medical Physics, Ludwig-Maximilians-University, Munich, Germany
- George Dedes
- Department of Medical Physics, Ludwig-Maximilians-University, Munich, Germany
- Andreas Resch
- Department of Medical Physics, Ludwig-Maximilians-University, Munich, Germany
- Michael Reiner
- Department of Radiation Oncology, Ludwig-Maximilians-University, Munich, Germany
- Ute Ganswindt
- Department of Radiation Oncology, Ludwig-Maximilians-University, Munich, Germany
- Reinoud Nijhuis
- Department of Radiation Oncology, Ludwig-Maximilians-University, Munich, Germany
- Christian Thieke
- Department of Radiation Oncology, Ludwig-Maximilians-University, Munich, Germany
- Claus Belka
- Department of Radiation Oncology, Ludwig-Maximilians-University, Munich, Germany
- Katia Parodi
- Department of Medical Physics, Ludwig-Maximilians-University, Munich, Germany
- Guillaume Landry
- Department of Radiation Oncology, Ludwig-Maximilians-University, Munich, Germany
- Department of Medical Physics, Ludwig-Maximilians-University, Munich, Germany
Collapse
13
Zhang Y, Yin FF, Pan T, Vergalasova I, Ren L. Preliminary clinical evaluation of a 4D-CBCT estimation technique using prior information and limited-angle projections. Radiother Oncol 2015; 115:22-9. [PMID: 25818396] [DOI: 10.1016/j.radonc.2015.02.022]
Abstract
BACKGROUND AND PURPOSE A technique has been previously reported to estimate high-quality 4D-CBCT using prior information and limited-angle projections. This study investigates its clinical feasibility through both phantom and patient studies. MATERIALS AND METHODS The technique used to estimate 4D-CBCT, called MMFD-NCC, is based on the previously reported motion modeling and free-form deformation (MMFD) method, with normalized cross-correlation (NCC) introduced as a new similarity metric. Clinical feasibility was evaluated by assessing the accuracy of the estimated anatomical structures against those in 'ground-truth' reference 4D-CBCTs, using data obtained from a physical phantom and three lung cancer patients. Both the volume percentage error (VPE) and the center-of-mass error (COME) of the estimated tumor volume were used as evaluation metrics. RESULTS The average VPE/COME of the tumor in the prior image was 257.1%/10.1 mm for the phantom study and 55.6%/3.8 mm for the patient study. Using only orthogonal-view 30° projections, MMFD-NCC reduced the corresponding values to 7.7%/1.2 mm and 9.6%/1.1 mm, respectively. CONCLUSION The MMFD-NCC technique estimates 4D-CBCT images with a geometrical tumor accuracy within 10% VPE and 2 mm COME, which can be used to improve the localization accuracy of radiotherapy.
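Both evaluation metrics are straightforward to compute from binary tumor masks. A sketch follows, assuming VPE is the mismatched (XOR) volume relative to the reference volume (a reading consistent with prior-image values exceeding 100%) and COME is the Euclidean distance between the two masks' centers of mass; the exact definitions in the paper may differ:

```python
import numpy as np

def vpe_and_come(est_mask, ref_mask, spacing_mm):
    """Volume percentage error (XOR volume / reference volume, in %) and
    center-of-mass error (in mm) between two binary tumor masks on the
    same voxel grid with the given per-axis spacing."""
    vpe = 100.0 * np.logical_xor(est_mask, ref_mask).sum() / ref_mask.sum()
    spacing = np.asarray(spacing_mm, dtype=float)
    # center of mass in mm: mean voxel index per axis, scaled by spacing
    com_est = np.array(np.nonzero(est_mask)).mean(axis=1) * spacing
    com_ref = np.array(np.nonzero(ref_mask)).mean(axis=1) * spacing
    come = float(np.linalg.norm(com_est - com_ref))
    return vpe, come
```

For example, a 4x4x4-voxel cube shifted by one voxel along one axis against itself gives VPE = 50% (32 mismatched voxels over 64) and COME = 1 voxel spacing.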
Affiliation(s)
- You Zhang
- Medical Physics Graduate Program, Duke University, Durham, USA.
- Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, USA; Department of Radiation Oncology, Duke University Medical Center, Durham, USA
- Tinsu Pan
- Department of Imaging Physics, The University of Texas, MD Anderson Cancer Center, Houston, USA
- Irina Vergalasova
- Department of Radiation Oncology, Duke University Medical Center, Durham, USA
- Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, USA; Department of Radiation Oncology, Duke University Medical Center, Durham, USA
14
Yan H, Zhen X, Folkerts M, Li Y, Pan T, Cervino L, Jiang SB, Jia X. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging. Med Phys 2015; 41:071903. [PMID: 24989381] [DOI: 10.1118/1.4881326]
Abstract
PURPOSE 4D cone-beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on 1-min scan data acquired with a standard 3D-CBCT protocol. METHODS The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT) so that the calculated 4D-CBCT projections match the measurements. A forward-backward splitting (FBS) method is introduced to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields the correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit (GPU) to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. RESULTS The proposed algorithm reconstructs 4D-CBCT images from highly undersampled projection data acquired with 1-min scans. Regarding anatomical structure localization accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3-0.5 mm are observed for patients 1-3. As for image quality, intensity errors below 5 and 20 HU relative to the planning CT are achieved for the phantom and patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared with results from the FDK algorithm using the 1-min and 4-min data, respectively. The computation time on an NVIDIA GTX590 card is 1-1.5 min per phase.
CONCLUSIONS High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D-CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.
Affiliation(s)
- Hao Yan
- Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390
- Xin Zhen
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Michael Folkerts
- Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390
- Yongbao Li
- Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390 and Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Tinsu Pan
- Department of Imaging Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas 77030
- Laura Cervino
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92093
- Steve B Jiang
- Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390
- Xun Jia
- Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390
15
Staub D, Murphy MJ. A digitally reconstructed radiograph algorithm calculated from first principles. Med Phys 2013; 40:011902. [PMID: 23298093] [DOI: 10.1118/1.4769413]
Abstract
PURPOSE To develop an algorithm for computing realistic digitally reconstructed radiographs (DRRs) that match real cone-beam CT (CBCT) projections with no artificial adjustments. METHODS The authors used measured attenuation data from cone-beam CT projection radiographs of different materials to obtain a function to convert CT number to linear attenuation coefficient (LAC). The effects of scatter, beam hardening, and veiling glare were first removed from the attenuation data. Using this conversion function, the authors calculated the line integral of LAC through a CT along rays connecting the radiation source and detector pixels with a ray-tracing algorithm, producing raw DRRs. The effects of scatter, beam hardening, and veiling glare were then included in the DRRs through postprocessing. RESULTS The authors compared actual CBCT projections to DRRs produced with all corrections (scatter, beam hardening, and veiling glare) and to uncorrected DRRs. Algorithm accuracy was assessed through visual comparison of projections and DRRs, pixel intensity comparisons, intensity histogram comparisons, and correlation plots of DRR-to-projection pixel intensities. In general, the fully corrected algorithm provided a small but nontrivial improvement in accuracy over the uncorrected algorithm. The authors also investigated both measurement- and computation-based methods for determining the beam hardening correction, and found the computation-based method to be superior, as it accounted for nonuniform bowtie filter thickness. The authors benchmarked the algorithm for speed and found that it produced DRRs in about 0.35 s at full detector and CT resolution with a ray step size of 0.5 mm. CONCLUSIONS The authors have demonstrated a DRR algorithm calculated from first principles that accounts for scatter, beam hardening, and veiling glare in order to produce accurate DRRs. The algorithm is computationally efficient, making it a good candidate for iterative CT reconstruction techniques that require a data fidelity term based on the matching of DRRs and projections.
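The core of the raw-DRR step is a CT-number-to-LAC conversion followed by a line integral of LAC along each source-to-pixel ray. The sketch below substitutes the textbook HU relation mu = mu_water(1 + HU/1000) for the authors' measured conversion function, and fixed-step nearest-neighbour sampling for their ray tracer; mu_water and the step size are assumed values:

```python
import numpy as np

def hu_to_lac(hu, mu_water=0.02):
    """Invert the textbook HU definition: mu = mu_water * (1 + HU/1000),
    with mu_water in 1/mm at an assumed effective CT energy; clip at zero."""
    return np.clip(mu_water * (1.0 + hu / 1000.0), 0.0, None)

def raw_drr_pixel(ct_hu, src, det, spacing_mm=1.0, step_mm=0.5):
    """Line integral of LAC from the source to one detector pixel through a
    2D CT (positions in mm), sampled at fixed ray steps; returns the raw
    DRR intensity exp(-integral), before any scatter/beam-hardening/glare
    postprocessing."""
    src, det = np.asarray(src, float), np.asarray(det, float)
    length = np.linalg.norm(det - src)
    n = max(int(length / step_mm), 1)
    ts = (np.arange(n) + 0.5) / n                   # midpoints of n segments
    pts = src + ts[:, None] * (det - src)           # sample points in mm
    idx = np.round(pts / spacing_mm).astype(int)    # nearest voxel indices
    valid = ((idx >= 0) & (idx < np.array(ct_hu.shape))).all(axis=1)
    mu = hu_to_lac(ct_hu[idx[valid, 0], idx[valid, 1]])
    integral = mu.sum() * (length / n)              # sum of mu * dl
    return float(np.exp(-integral))
```

For a uniform water phantom (HU = 0) the integral reduces to mu_water times the path length, so a 99 mm ray yields exp(-0.02 * 99), a quick sanity check on any DRR code.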
Affiliation(s)
- David Staub
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298, USA.
16
Hugo GD, Rosu M. Advances in 4D radiation therapy for managing respiration: part I - 4D imaging. Z Med Phys 2012; 22:258-71. [PMID: 22784929] [PMCID: PMC4153750] [DOI: 10.1016/j.zemedi.2012.06.009]
Abstract
Techniques for managing respiration during imaging and planning of radiation therapy are reviewed, concentrating on free-breathing (4D) approaches. First, we focus on detailing the historical development and basic operational principles of the currently available "first generation" 4D imaging modalities: 4D computed tomography, 4D cone-beam computed tomography, 4D magnetic resonance imaging, and 4D positron emission tomography. Features and limitations of these first-generation systems are described, including the necessity of breathing surrogates for 4D image reconstruction, the assumptions made in acquisition and reconstruction about the breathing pattern, and commonly observed artifacts. Both established and developmental methods to address these limitations are detailed. Finally, strategies to construct 4D targets and images and, alternatively, to compress 4D information into static targets and images for radiation therapy planning are described.
Affiliation(s)
- Geoffrey D Hugo
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298, USA.
17
Ren L, Chetty IJ, Zhang J, Jin JY, Wu QJ, Yan H, Brizel DM, Lee WR, Movsas B, Yin FF. Development and Clinical Evaluation of a Three-Dimensional Cone-Beam Computed Tomography Estimation Method Using a Deformation Field Map. Int J Radiat Oncol Biol Phys 2012; 82:1584-93. [PMID: 21477945] [DOI: 10.1016/j.ijrobp.2011.02.002]
18
Lee H, Xing L, Davidi R, Li R, Qian J, Lee R. Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints. Phys Med Biol 2012; 57:2287-307. [PMID: 22460008] [DOI: 10.1088/0031-9155/57/8/2287]
Abstract
Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy, and a natural question is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to address this problem. Images reconstructed from full projections acquired on the first day of radiation therapy treatment are used as prior images, and subsequent scans are acquired using a sparse-projection protocol. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function of the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior and reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert this information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, matched regions (unchanged anatomy) between the prior and current images are assigned smaller weight values, which translate into less influence on the CS iterative reconstruction process. Mismatched regions (changed anatomy), on the other hand, are assigned larger values and are updated more by the new projection data, thus avoiding any adverse effects of the prior images. The APICCS approach was systematically assessed using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method effectively enhances image quality in the matched regions between the prior and current images compared with the existing PICCS algorithm. Compared with current CBCT imaging protocols, the APICCS algorithm allows an imaging dose reduction by a factor of 10-40, owing to the greatly reduced number of projections and the lower x-ray tube current of the low-dose protocol.
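The mismatch-detection and distance-transform step described above can be sketched as follows. The HU thresholds for the air/soft-tissue/bone classification and the exponential decay used to map distance to a relaxation weight are illustrative assumptions, not the paper's exact construction (requires NumPy and SciPy):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def relaxation_map(prior_hu, current_hu, tau_mm=5.0, spacing_mm=1.0):
    """Classify both images into air/soft-tissue/bone, flag voxels whose
    class differs, and turn the distance to the nearest mismatch into a
    voxel-wise relaxation weight: ~1 near changed anatomy (updated more by
    new projection data), ~0 in matched regions (dominated by the prior)."""
    def classify(img):
        cls = np.ones_like(img, dtype=np.int8)      # 1 = soft tissue
        cls[img < -400] = 0                         # 0 = air
        cls[img > 300] = 2                          # 2 = bone
        return cls
    mismatch = classify(prior_hu) != classify(current_hu)
    if not mismatch.any():
        return np.zeros(prior_hu.shape)             # anatomy unchanged
    # Euclidean distance (in mm) from each voxel to the nearest mismatch
    dist = distance_transform_edt(~mismatch, sampling=spacing_mm)
    return np.exp(-dist / tau_mm)
```

The weights then scale how strongly each voxel is relaxed toward the new projection data during the CS iterations, which is the qualitative behavior the abstract describes.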
Affiliation(s)
- Ho Lee
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847, USA.
19
Staub D, Docef A, Brock RS, Vaman C, Murphy MJ. 4D Cone-beam CT reconstruction using a motion model based on principal component analysis. Med Phys 2012; 38:6697-709. [PMID: 22149852] [DOI: 10.1118/1.3662895]
Abstract
PURPOSE To provide a proof-of-concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. METHODS The algorithm animates a patient fan-beam CT (FBCT) with a patient-specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel-by-voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. RESULTS The algorithm produces accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. CONCLUSIONS Proof-of-concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be the best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine.
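The PCA motion model at the core of this method is compact: flatten each training DVF into a row vector, subtract the mean, and keep the leading right-singular vectors as the eigenvector basis, so that any DVF is approximated as the mean plus a weighted sum of modes. A minimal NumPy sketch (the 2D/3D-registration training DVFs and the breathing-trace weighting function are outside its scope):

```python
import numpy as np

def pca_motion_model(dvfs, n_modes=3):
    """PCA of a training set of flattened DVFs.

    dvfs : array of shape (n_samples, n_voxels * 3), one flattened DVF per row
    Returns (mean, modes) with modes of shape (n_modes, n_voxels * 3)."""
    X = np.asarray(dvfs, dtype=float)
    mean = X.mean(axis=0)
    # right-singular vectors of the centered data are the PCA eigenvectors
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes]

def synthesize_dvf(mean, modes, weights):
    """Reconstruct a DVF as mean + weights @ modes."""
    return mean + np.asarray(weights) @ modes
```

During reconstruction, only the small weight vector is optimized (e.g., with Nelder-Mead, as the authors report), which is what makes the deformation search tractable.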
Affiliation(s)
- David Staub
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298, USA.