1
Lu A, Huang H, Hu Y, Zbijewski W, Unberath M, Siewerdsen JH, Weiss CR, Sisniega A. Vessel-targeted compensation of deformable motion in interventional cone-beam CT. Med Image Anal 2024; 97:103254. [PMID: 38968908] [DOI: 10.1016/j.media.2024.103254]
Abstract
The present standard of care for unresectable liver cancer is transarterial chemoembolization (TACE), which uses chemotherapeutic particles to selectively embolize the arteries supplying hepatic tumors. Accurate volumetric identification of intricate fine vascularity is crucial for selective embolization. Three-dimensional imaging, particularly cone-beam CT (CBCT), aids visualization and targeting of small vessels in such highly variable anatomy, but long image acquisition time results in intra-scan patient motion, which distorts vascular structures and tissue boundaries. To improve clarity of vascular anatomy and intra-procedural utility, this work proposes a targeted motion estimation and compensation framework that requires no prior information, external tracking, or user interaction. Motion estimation is performed in two stages: (i) a target identification stage that segments arteries and catheters in the projection domain using a multi-view convolutional neural network to construct a coarse 3D vascular mask; and (ii) a targeted motion estimation stage that iteratively solves for the time-varying motion field via optimization of a vessel-enhancing objective function computed over the target vascular mask. The vessel-enhancing objective is derived from eigenvalues of the local image Hessian to emphasize bright tubular structures. Motion compensation is achieved via spatial transformer operators that apply time-dependent deformations to partial angle reconstructions, allowing efficient minimization via gradient backpropagation. The framework was trained and evaluated on anatomically realistic simulated motion-corrupted CBCTs mimicking TACE of hepatic tumors, at intermediate (3.0 mm) and large (6.0 mm) motion magnitudes.
Motion compensation substantially improved median vascular Dice coefficient (from 0.30 to 0.59 for large motion), image SSIM (from 0.77 to 0.93 for large motion), and vessel sharpness (from 0.189 mm⁻¹ to 0.233 mm⁻¹ for large motion) in simulated cases. Motion compensation also demonstrated increased vessel sharpness (from 0.188 mm⁻¹ to 0.205 mm⁻¹) and reconstructed vessel length (median increased from 37.37 mm to 41.00 mm) on a clinical interventional CBCT. The proposed anatomy-aware motion compensation framework presents a promising approach for improving the utility of CBCT for intra-procedural vascular imaging, facilitating selective embolization procedures.
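The vessel-enhancing objective above is derived from eigenvalues of the local image Hessian to emphasize bright tubular structures. A minimal sketch of a Frangi-style vesselness response of that kind, given eigenvalues ordered by magnitude; the alpha, beta, and c parameter values are illustrative assumptions, not taken from the paper:

```python
import math

def frangi_vesselness(l1, l2, l3, alpha=0.5, beta=0.5, c=10.0):
    """Frangi-style vesselness from local Hessian eigenvalues,
    ordered so that |l1| <= |l2| <= |l3|. A bright tube has
    l1 ~ 0 and l2, l3 strongly negative."""
    if l2 >= 0 or l3 >= 0:                   # bright-on-dark tubes only
        return 0.0
    ra = abs(l2) / abs(l3)                   # distinguishes plates from lines
    rb = abs(l1) / math.sqrt(abs(l2 * l3))   # deviation from blob-like structure
    s = math.sqrt(l1**2 + l2**2 + l3**2)     # overall structure strength
    return ((1 - math.exp(-ra**2 / (2 * alpha**2)))
            * math.exp(-rb**2 / (2 * beta**2))
            * (1 - math.exp(-s**2 / (2 * c**2))))

# A tube-like eigenvalue triple scores above blob- and plate-like ones.
v_tube  = frangi_vesselness(0.0, -10.0, -10.0)
v_blob  = frangi_vesselness(-10.0, -10.0, -10.0)
v_plate = frangi_vesselness(0.0, 0.0, -10.0)
```

Summing such a response over the target vascular mask gives a scalar objective that rewards sharp, tubular, contrast-enhanced structures.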
Affiliation(s)
- Alexander Lu
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Heyuan Huang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA; Departments of Imaging Physics, Radiation Physics, and Neurosurgery, The University of Texas M.D. Anderson Cancer Center, TX, USA
- Clifford R Weiss
- Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
2
Lin Z, Wang Y, Bian Z, Ma J. [A deep blur learning-based motion artifact reduction algorithm for dental cone-beam computed tomography images]. Nan Fang Yi Ke Da Xue Xue Bao (Journal of Southern Medical University) 2024; 44:1198-1208. [PMID: 38977351] [PMCID: PMC11237304] [DOI: 10.12122/j.issn.1673-4254.2024.06.22]
Abstract
OBJECTIVE We propose a motion artifact correction algorithm (DMBL) for reducing motion artifacts in reconstructed dental cone-beam computed tomography (CBCT) images based on deep blur learning. METHODS A blur encoder was used to extract motion-related degradation features to model the degradation process caused by motion, and the obtained features were fed into the artifact correction module for artifact removal. The artifact correction module adopts a joint learning framework for image blur removal and image blur simulation to handle spatially varying and random motion patterns. Comparative experiments were conducted to verify the effectiveness of the proposed method on both simulated motion datasets and clinical datasets. RESULTS The experimental results on the simulated dataset showed that, compared with existing methods, the PSNR of the proposed method increased by 2.88%, the SSIM increased by 0.89%, and the RMSE decreased by 10.58%. The results on the clinical dataset showed that the proposed method achieved the highest expert-rated subjective image quality score of 4.417 (on a 5-point scale), significantly higher than those of the comparison methods. CONCLUSION The proposed DMBL algorithm, with its deep blur joint learning network structure, can effectively reduce motion artifacts in dental CBCT images and achieve high-quality image restoration.
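For context, the PSNR and RMSE figures of merit quoted above can be computed as in this sketch; the flattened "images" are toy data, not values from the study:

```python
import math

def rmse(ref, img):
    """Root-mean-square error between two equally sized (flattened) images."""
    n = len(ref)
    return math.sqrt(sum((r - x) ** 2 for r, x in zip(ref, img)) / n)

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20 * math.log10(data_range / e)

# Toy profiles: an artifact-corrected image should sit closer to the
# reference than the motion-corrupted input, raising PSNR, lowering RMSE.
ref       = [0.00, 0.50, 1.00, 0.50]
corrupted = [0.10, 0.40, 0.80, 0.60]
corrected = [0.02, 0.48, 0.95, 0.52]
```

SSIM adds luminance, contrast, and structure terms on local windows and is deliberately omitted here to keep the sketch short.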
Affiliation(s)
- Z Lin
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Y Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Z Bian
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- J Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
3
Huang H, Liu Y, Siewerdsen JH, Lu A, Hu Y, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Deformable motion compensation in interventional cone-beam CT with a context-aware learned autofocus metric. Med Phys 2024; 51:4158-4180. [PMID: 38733602] [DOI: 10.1002/mp.17125]
Abstract
PURPOSE Interventional cone-beam CT (CBCT) offers 3D visualization of soft-tissue and vascular anatomy, enabling 3D guidance of abdominal interventions. However, its long acquisition time makes CBCT susceptible to patient motion. Image-based autofocus offers a suitable platform for compensation of deformable motion in CBCT, but it relies on handcrafted motion metrics that are based on first-order image properties and lack awareness of the underlying anatomy. This work proposes a data-driven approach to motion quantification via a learned, context-aware, deformable metric, VIF_DL, that quantifies the amount of motion degradation as well as the realism of the structural anatomical content in the image. METHODS The proposed VIF_DL was modeled as a deep convolutional neural network (CNN) trained to recreate a reference-based structural similarity metric: visual information fidelity (VIF). The deep CNN acted on motion-corrupted images, providing an estimation of the spatial VIF map that would be obtained against a motion-free reference, capturing motion distortion and anatomic plausibility. The deep CNN featured a multi-branch architecture with a high-resolution branch for estimation of voxel-wise VIF on a small volume of interest. A second contextual, low-resolution branch provided features associated with anatomical context for disentanglement of motion effects and anatomical appearance. The deep CNN was trained on paired motion-free and motion-corrupted data obtained with a high-fidelity forward projection model for a protocol involving 120 kV and 9.90 mGy. The performance of VIF_DL was evaluated via metrics of correlation with ground-truth VIF and with the underlying deformable motion field in simulated data, with deformable motion fields with amplitude ranging from 5 to 20 mm and frequency from 2.4 up to 4 cycles/scan.
Robustness to variation in tissue contrast and noise levels was assessed in simulation studies with varying beam energy (90-120 kV) and dose (1.19-39.59 mGy). Further validation was obtained on experimental studies with a deformable phantom. Final validation was obtained via integration of VIF_DL into an autofocus compensation framework, applied to motion compensation on experimental datasets and evaluated via metrics of spatial resolution on soft-tissue boundaries and sharpness of contrast-enhanced vascularity. RESULTS The magnitude and spatial map of VIF_DL showed consistent and high correlation with the ground truth in both simulated and real data, yielding average normalized cross-correlation (NCC) values of 0.95 and 0.88, respectively. Similarly, VIF_DL achieved good correlation with the underlying motion field, with average NCC of 0.90. In experimental phantom studies, VIF_DL properly reflected the change in motion amplitudes and frequencies: voxel-wise averaging of the local VIF_DL across the full reconstructed volume yielded an average value of 0.69 for mild motion (2 mm, 12 cycles/scan) and 0.29 for severe motion (12 mm, 6 cycles/scan). Autofocus motion compensation using VIF_DL resulted in noticeable mitigation of motion artifacts and improved spatial resolution of soft-tissue and high-contrast structures, with reductions in edge spread function width of 8.78% and 9.20%, respectively. Motion compensation also increased the conspicuity of contrast-enhanced vascularity, reflected in a 9.64% increase in vessel sharpness.
CONCLUSION The proposed VIF_DL, featuring a novel context-aware architecture, demonstrated its capacity as a reference-free surrogate of structural similarity to quantify motion-induced degradation of image quality and anatomical plausibility of image content. The validation studies showed robust performance across motion patterns, x-ray techniques, and anatomical instances. The proposed anatomy- and context-aware metric offers a powerful alternative to conventional motion estimation metrics and a step forward for application of deep autofocus motion compensation for guidance in clinical interventional procedures.
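The NCC values reported above measure agreement between the learned metric map and its reference. A minimal sketch of zero-mean normalized cross-correlation on flattened maps (the VIF values below are toy numbers, not data from the paper):

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two maps,
    flattened to 1D lists. Returns a value in [-1, 1]."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

gt_vif   = [0.90, 0.70, 0.40, 0.20]   # reference-based VIF map (toy)
pred_vif = [0.85, 0.72, 0.38, 0.25]   # learned reference-free estimate (toy)
agreement = ncc(gt_vif, pred_vif)
```

A value near 1 indicates that the learned map tracks the spatial pattern of the reference metric, even if absolute levels differ.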
Affiliation(s)
- Heyuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Yixuan Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Alexander Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Clifford R Weiss
- Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
Collapse
|
4
Lauria M, Miller C, Singhrao K, Lewis J, Lin W, O'Connell D, Naumann L, Stiehl B, Santhanam A, Boyle P, Raldow AC, Goldin J, Barjaktarevic I, Low DA. Motion compensated cone-beam CT reconstruction using an a priori motion model from CT simulation: a pilot study. Phys Med Biol 2024; 69:075022. [PMID: 38452385] [DOI: 10.1088/1361-6560/ad311b]
Abstract
Objective. To combat the motion artifacts present in traditional 4D-CBCT reconstruction, an iterative technique known as the motion-compensated simultaneous algebraic reconstruction technique (MC-SART) was previously developed. MC-SART employs a 4D-CBCT reconstruction to obtain an initial model, which suffers from a lack of sufficient projections in each bin. The purpose of this study is to demonstrate the feasibility of introducing a motion model acquired during CT simulation to MC-SART, coined model-based CBCT (MB-CBCT). Approach. For each of 5 patients, we acquired 5DCTs during simulation and pre-treatment CBCTs with a simultaneous breathing surrogate. We cross-calibrated the 5DCT and CBCT breathing waveforms by matching the diaphragms and employed the 5DCT motion model parameters for MC-SART. We introduced the Amplitude Reassignment Motion Modeling technique, which measures the ability of the model to control diaphragm sharpness by reassigning projection amplitudes with varying resolution. We evaluated the sharpness of tumors and compared it between MB-CBCT and 4D-CBCT. We quantified sharpness by fitting an error function across anatomical boundaries. Furthermore, we compared our MB-CBCT approach to the traditional MC-SART approach. We evaluated MB-CBCT's robustness over time by reconstructing multiple fractions for each patient and measuring consistency in tumor centroid locations between 4D-CBCT and MB-CBCT. Main results. We found that diaphragm sharpness rose consistently with increasing amplitude resolution for 4/5 patients. We observed consistently high image quality across multiple fractions, and stable tumor centroids with an average 0.74 ± 0.31 mm difference between the 4D-CBCT and MB-CBCT. Overall, our MB-CBCT technique demonstrated vast improvements over 3D-CBCT and 4D-CBCT in terms of both diaphragm sharpness and overall image quality. Significance. This work is an important extension of the MC-SART technique. We demonstrated the ability of a priori 5DCT models to provide motion compensation for CBCT reconstruction. We showed improvements in image quality over both 4D-CBCT and the traditional MC-SART approach.
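Sharpness quantification by "fitting an error function across anatomical boundaries", as described above, can be sketched as follows. The synthetic profile and the coarse grid-search fit are illustrative assumptions standing in for the authors' least-squares implementation:

```python
import math

def edge_model(x, x0, sigma, lo, hi):
    """Idealized boundary profile: an error-function edge of width sigma
    centered at x0, stepping from intensity lo to hi."""
    return lo + (hi - lo) * 0.5 * (1 + math.erf((x - x0) / (sigma * math.sqrt(2))))

def fit_sigma(xs, ys, x0, lo, hi, candidates):
    """Pick the edge width that best explains the measured profile.
    A coarse grid search stands in for a proper least-squares fit."""
    def sse(s):
        return sum((edge_model(x, x0, s, lo, hi) - y) ** 2 for x, y in zip(xs, ys))
    return min(candidates, key=sse)

# Synthetic profile across a diaphragm-like boundary, true sigma = 1.5 (mm).
xs = [i * 0.5 - 5.0 for i in range(21)]
ys = [edge_model(x, 0.0, 1.5, 0.0, 1.0) for x in xs]
sigma_hat = fit_sigma(xs, ys, 0.0, 0.0, 1.0, [0.5, 1.0, 1.5, 2.0, 2.5])
```

A smaller fitted sigma corresponds to a sharper boundary, so comparing sigma between reconstructions quantifies the sharpness gain.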
Affiliation(s)
- Michael Lauria
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Claudia Miller
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Kamal Singhrao
- Brigham and Women's Hospital, Dana Farber Cancer Institute and Harvard Medical School, Department of Radiation Oncology, Boston, MA, United States of America
- John Lewis
- Cedars-Sinai Medical Center, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Weicheng Lin
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Dylan O'Connell
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Louise Naumann
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Bradley Stiehl
- Cedars-Sinai Medical Center, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Anand Santhanam
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Peter Boyle
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Ann C Raldow
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Jonathan Goldin
- UCLA, Department of Radiological Sciences, Los Angeles, CA, United States of America
- Igor Barjaktarevic
- UCLA, Department of Pulmonary and Critical Care Medicine, Los Angeles, CA, United States of America
- Daniel A Low
- UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
5
Thies M, Wagner F, Maul N, Folle L, Meier M, Rohleder M, Schneider LS, Pfaff L, Gu M, Utz J, Denzinger F, Manhart M, Maier A. Gradient-based geometry learning for fan-beam CT reconstruction. Phys Med Biol 2023; 68:205004. [PMID: 37779386] [DOI: 10.1088/1361-6560/acf90e]
Abstract
Objective. Incorporating computed tomography (CT) reconstruction operators into differentiable pipelines has proven beneficial in many applications. Such approaches usually focus on the projection data and keep the acquisition geometry fixed. However, precise knowledge of the acquisition geometry is essential for high-quality reconstruction results. In this paper, the differentiable formulation of fan-beam CT reconstruction is extended to the acquisition geometry. Approach. The fan-beam CT reconstruction is analytically differentiated with respect to the acquisition geometry. This allows gradient information to be propagated from a loss function on the reconstructed image into the geometry parameters. As a proof-of-concept experiment, this idea is applied to rigid motion compensation. The cost function is parameterized by a trained neural network which regresses an image quality metric from the motion-affected reconstruction alone. Main results. The algorithm improves the structural similarity index measure (SSIM) from 0.848 for the initial motion-affected reconstruction to 0.946 after compensation. It also generalizes to real fan-beam sinograms rebinned from a helical trajectory, where the SSIM increases from 0.639 to 0.742. Significance. Using the proposed method, we are the first to optimize an autofocus-inspired algorithm based on analytical gradients. Beyond motion compensation, we see further use cases of our differentiable method in scanner calibration and hybrid techniques employing deep models.
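The core idea of propagating gradients from an image-domain loss into geometry parameters can be illustrated on a toy 1D problem. Here a finite-difference gradient stands in for the paper's analytical derivative, and a shifted signal stands in for the geometry-dependent reconstruction; all names and values are illustrative:

```python
def shift_signal(sig, offset):
    """Toy 'reconstruction': linearly interpolated shift of a 1D signal,
    standing in for the geometry-dependent backprojection."""
    out = []
    for i in range(len(sig)):
        x = i + offset
        j = int(x // 1)
        t = x - j
        a = sig[min(max(j, 0), len(sig) - 1)]
        b = sig[min(max(j + 1, 0), len(sig) - 1)]
        out.append((1 - t) * a + t * b)
    return out

def cost(offset, sig, ref):
    """Image-domain loss comparing the current 'reconstruction' to a target."""
    return sum((a - b) ** 2 for a, b in zip(shift_signal(sig, offset), ref))

def optimize_offset(sig, ref, offset=0.0, lr=0.05, eps=1e-4, steps=200):
    """Gradient descent on the geometry parameter; a central finite
    difference stands in for the paper's analytical gradient."""
    for _ in range(steps):
        g = (cost(offset + eps, sig, ref) - cost(offset - eps, sig, ref)) / (2 * eps)
        offset -= lr * g
    return offset

sig = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
ref = shift_signal(sig, 0.8)      # ground-truth geometry offset of 0.8
est = optimize_offset(sig, ref)
```

The same loop shape applies when the loss is a learned image-quality regressor and the parameters are per-view rigid poses.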
Affiliation(s)
- Mareike Thies
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Fabian Wagner
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Noah Maul
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Lukas Folle
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Manuela Meier
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Maximilian Rohleder
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Laura Pfaff
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Mingxuan Gu
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Jonas Utz
- Department AIBE, FAU Erlangen-Nürnberg, Germany
- Felix Denzinger
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
- Siemens Healthcare GmbH, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Germany
6
Lu A, Huang H, Hu Y, Zbijewski W, Unberath M, Siewerdsen JH, Weiss CR, Sisniega A. Deformable Motion Compensation for Intraprocedural Vascular Cone-beam CT with Sequential Projection Domain Targeting and Vessel-Enhancing Autofocus. Proc SPIE Int Soc Opt Eng 2023; 12466:124660P. [PMID: 37937266] [PMCID: PMC10629230] [DOI: 10.1117/12.2652137]
Abstract
Purpose Cone-beam CT (CBCT) is used in interventional radiology (IR) for identification of complex vascular anatomy that is difficult to visualize in 2D fluoroscopy. However, long acquisition time makes CBCT susceptible to soft-tissue deformable motion that degrades visibility of fine vessels. We propose a targeted framework to compensate for deformable intra-scan motion via learned full-sequence models for identification of vascular anatomy, coupled to an autofocus function specifically tailored to vascular imaging. Methods The vessel-targeted autofocus acts in two stages: (i) identification of vascular and catheter targets in the projection domain; and (ii) autofocus optimization of a 4D vector field through an objective function that quantifies vascular visibility. Target identification is based on a deep learning model that operates on the complete sequence of projections via a transformer encoder-decoder architecture that uses spatial-temporal self-attention modules to infer long-range feature correlations, enabling identification of vascular anatomy with highly variable conspicuity. The vascular autofocus function is derived from eigenvalues of the local image Hessian, which quantify the local image structure for identification of bright tubular structures. Motion compensation was achieved via spatial transformer operators that impart time-dependent deformations to NPAR = 90 partial angle reconstructions, allowing for efficient minimization via gradient backpropagation. The framework was trained and evaluated on synthetic abdominal CBCTs obtained from liver MDCT volumes, including realistic models of contrast-enhanced vascularity with 15 to 30 end branches, 1-3.5 mm vessel diameter, and 1400 HU contrast. Results The targeted autofocus resulted in qualitative and quantitative improvement in vascular visibility in both simulated and clinical intra-procedural CBCT.
The transformer-based target identification module resulted in superior detection of target vascularity and fewer false positives compared to a baseline U-Net model acting on individual projection views, reflected in a 1.97x improvement in intersection-over-union values. Motion compensation in simulated data yielded improved conspicuity of vascular anatomy, reduced streak artifacts and blurring around vessels, and recovery of shape distortion. These improvements amounted to an average 147% increase in cross correlation computed against the motion-free ground truth, relative to the uncompensated reconstruction. Conclusion Targeted autofocus yielded improved visibility of vascular anatomy in abdominal CBCT, providing better potential for intra-procedural tracking of fine vascular anatomy in 3D images. The proposed method poses an efficient solution to motion compensation in task-specific imaging, with future application to a wider range of imaging scenarios.
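The compensation step above deforms partial angle reconstructions (PARs) before summation. This can be sketched with toy integer shifts standing in for the differentiable spatial transformer operators; the vessel profile and per-PAR displacements are illustrative:

```python
def shift(par, dx):
    """Integer shift standing in for the spatial transformer's
    time-dependent deformation of one partial angle reconstruction."""
    n = len(par)
    return [par[i - dx] if 0 <= i - dx < n else 0.0 for i in range(n)]

# Three PARs of the same vessel; intra-scan motion displaced each one
# differently, so the naive sum smears the vessel across voxels.
vessel = [0, 0, 1, 3, 1, 0, 0]
motion = [0, 1, 2]                        # per-PAR displacement (toy)
pars   = [shift(vessel, d) for d in motion]

blurred     = [sum(p[i] for p in pars) for i in range(len(vessel))]
compensated = [sum(shift(p, -d)[i] for p, d in zip(pars, motion))
               for i in range(len(vessel))]
```

With the estimated motion undone per PAR, the summed reconstruction restores the vessel's peak intensity instead of smearing it; in the actual framework the shifts are smooth interpolated deformations, so gradients flow through them during autofocus optimization.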
Affiliation(s)
- Alexander Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Heyuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Wojtek Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Departments of Imaging Physics, Neurosurgery, and Radiation Physics, The University of Texas M.D. Anderson Cancer Center, TX, USA
- Clifford R Weiss
- Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
Collapse
|
7
Huang H, Siewerdsen JH, Lu A, Hu Y, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Multi-Stage Adaptive Spline Autofocus (MASA) with a Learned Metric for Deformable Motion Compensation in Interventional Cone-Beam CT. Proc SPIE Int Soc Opt Eng 2023; 12463:1246314. [PMID: 37937146] [PMCID: PMC10629227] [DOI: 10.1117/12.2654361]
Abstract
Purpose Cone-beam CT (CBCT) is widespread in abdominal interventional imaging, but its long acquisition time makes it susceptible to patient motion. Image-based autofocus has shown success in CBCT deformable motion compensation via deep autofocus metrics and multi-region optimization, but it is challenged by the large parameter dimensionality required to capture intricate motion trajectories. This work leverages the differentiable nature of deep autofocus metrics to build a novel optimization strategy, Multi-Stage Adaptive Spline Autofocus (MASA), for compensation of complex deformable motion in abdominal CBCT. Methods MASA poses the autofocus problem as a multi-stage adaptive sampling of the motion trajectory, represented with a Hermite spline basis with variable amplitude and knot temporal positioning. The adaptive method permits simultaneous optimization of the sampling phase, local temporal sampling density, and time-dependent amplitude of the motion trajectory. The optimization is performed in a multi-stage schedule with an increasing number of knots that progressively accommodates complex trajectories in late stages, preconditioned by coarser components from early stages, with minimal increase in dimensionality. MASA was evaluated in controlled simulation experiments with two types of motion trajectories: (i) combinations of slow drifts with sudden jerk (sigmoid) motion; and (ii) combinations of periodic motion sources of varying frequency into multi-frequency trajectories. Further validation was obtained in clinical data from liver CBCT featuring motion of contrast-enhanced vessels and soft-tissue structures. Results The adaptive sampling strategy provided successful motion compensation for sigmoid trajectories compared to fixed sampling strategies (mean SSIM increase of 0.026 vs 0.011).
Inspection of the estimated motion showed the capability of MASA to automatically allocate larger sampling density to parts of the scan timeline featuring sudden motion, effectively accommodating complex motion without increasing the problem dimension. Experiments on multi-frequency trajectories with 3-stage MASA (5, 10, and 15 knots) yielded a twofold SSIM increase compared to single-stage autofocus with 15 knots (0.076 vs 0.040). Application of MASA to clinical datasets resulted in simultaneous improvement in the delineation of both contrast-enhanced vessels and soft-tissue structures in the liver. Conclusion A new autofocus framework, MASA, was developed, including a novel multi-stage technique for adaptive temporal sampling of the motion trajectory in combination with fully differentiable deep autofocus metrics. This adaptive sampling approach is a crucial step toward application of deformable motion compensation to complex temporal motion trajectories.
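The spline parameterization at the heart of MASA can be illustrated with cubic Hermite interpolation. Catmull-Rom tangents are an assumed choice for this sketch, and the knots here are fixed on a uniform grid, whereas MASA also optimizes their temporal positions; the sigmoid "jerk" trajectory mirrors the paper's first motion type:

```python
import math

def catmull_rom(knots_t, knots_y, t):
    """Cubic Hermite interpolation with Catmull-Rom (finite-difference)
    tangents; a stand-in for the spline basis of the motion trajectory."""
    n = len(knots_t)
    for i in range(n - 1):
        if knots_t[i] <= t <= knots_t[i + 1]:
            h = knots_t[i + 1] - knots_t[i]
            s = (t - knots_t[i]) / h
            y0, y1 = knots_y[i], knots_y[i + 1]
            m0 = (knots_y[i + 1] - knots_y[max(i - 1, 0)]) / (knots_t[i + 1] - knots_t[max(i - 1, 0)])
            m1 = (knots_y[min(i + 2, n - 1)] - knots_y[i]) / (knots_t[min(i + 2, n - 1)] - knots_t[i])
            h00 = 2 * s**3 - 3 * s**2 + 1      # Hermite basis functions
            h10 = s**3 - 2 * s**2 + s
            h01 = -2 * s**3 + 3 * s**2
            h11 = s**3 - s**2
            return h00 * y0 + h10 * h * m0 + h01 * y1 + h11 * h * m1
    return knots_y[-1]

def sigmoid(t):
    """Slow drift plus a sudden jerk near mid-scan (toy trajectory, mm)."""
    return 6.0 / (1 + math.exp(-20 * (t - 0.5)))

def fit_error(n_knots):
    """Worst-case error of the spline fit to the trajectory on a fine grid."""
    ts = [i / (n_knots - 1) for i in range(n_knots)]
    ys = [sigmoid(t) for t in ts]
    grid = [i / 100 for i in range(101)]
    return max(abs(catmull_rom(ts, ys, t) - sigmoid(t)) for t in grid)

err_coarse = fit_error(5)    # early-stage knot count
err_fine   = fit_error(15)   # late-stage knot count
```

The late-stage knot count resolves the jerk far better, which is the motivation for the multi-stage schedule: coarse stages precondition the trajectory, fine stages capture sudden motion.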
Affiliation(s)
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- A Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Y Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
8
Ibad HA, de Cesar Netto C, Shakoor D, Sisniega A, Liu S, Siewerdsen JH, Carrino JA, Zbijewski W, Demehri S. Computed Tomography: State-of-the-Art Advancements in Musculoskeletal Imaging. Invest Radiol 2023; 58:99-110. [PMID: 35976763] [PMCID: PMC9742155] [DOI: 10.1097/rli.0000000000000908]
Abstract
Although musculoskeletal magnetic resonance imaging (MRI) plays a dominant role in characterizing abnormalities, novel computed tomography (CT) techniques have found an emerging niche in several scenarios such as trauma, gout, and the characterization of pathologic biomechanical states during motion and weight-bearing. Recent developments and advancements in the field of musculoskeletal CT include 4-dimensional (4D), cone-beam (CB), and dual-energy (DE) CT. Four-dimensional CT has the potential to quantify biomechanical derangements of peripheral joints in different joint positions to diagnose and characterize patellofemoral instability, scapholunate ligamentous injuries, and syndesmotic injuries. Cone-beam CT provides an opportunity to image peripheral joints during weight-bearing, augmenting the diagnosis and characterization of disease processes. Emerging CBCT technologies have improved spatial resolution for osseous microstructures in the quantitative analysis of osteoarthritis-related subchondral bone changes, trauma, and fracture healing. Dual-energy CT-based material decomposition visualizes and quantifies monosodium urate crystals in gout, bone marrow edema in traumatic and nontraumatic fractures, and neoplastic disease. Recently, DE techniques have been applied to CBCT, contributing to increased image quality in contrast-enhanced arthrography, bone densitometry, and bone marrow imaging. This review describes these 4D CT, CBCT, and DECT advances, their current logistical limitations, and prospects for each technique.
Affiliation(s)
- Hamza Ahmed Ibad
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Cesar de Cesar Netto
- Department of Orthopaedics and Rehabilitation, Carver College of Medicine, University of Iowa, Iowa City, IA, USA
- Delaram Shakoor
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Stephen Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- John A Carrino
- Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Shadpour Demehri
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
9
Hu Y, Huang H, Siewerdsen JH, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Simulation of Random Deformable Motion in Soft-Tissue Cone-Beam CT with Learned Models. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12304:1230413. [PMID: 36381251 PMCID: PMC9654724 DOI: 10.1117/12.2646720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Cone-beam CT (CBCT) is widely used for guidance in interventional radiology but is susceptible to motion artifacts. Motion in interventional CBCT features a complex combination of diverse sources, including quasi-periodic, consistent motion patterns such as respiratory motion, and aperiodic, quasi-random motion such as peristalsis. Recent developments in image-based motion compensation include approaches that combine autofocus techniques with deep learning models for extraction of image features pertinent to CBCT motion. Training of such deep autofocus models requires the generation of large amounts of realistic, motion-corrupted CBCT. Previous work on motion simulation focused mostly on quasi-periodic motion patterns, and reliable simulation of complex combined motion with quasi-random components remains an unaddressed challenge. This work presents a framework aimed at synthesis of realistic motion trajectories for simulation of deformable motion in soft-tissue CBCT. The approach leverages the capability of conditional generative adversarial network (GAN) models to learn the complex underlying motion present in unlabeled, motion-corrupted CBCT volumes, and is designed for training with unpaired clinical CBCT in an unsupervised fashion. This work presents a first feasibility study, in which the model was trained with simulated data featuring known motion, providing a controlled scenario for validation of the proposed approach prior to extension to clinical data. Our proof-of-concept study illustrated the potential of the model to generate realistic, variable simulation of CBCT deformable motion fields, consistent with three trends underlying the designed training data: i) the synthetic motion induced only diffeomorphic deformations, with Jacobian determinant larger than zero; ii) the synthetic motion showed median displacement of 0.5 mm in regions predominantly static in the training data (e.g., the posterior aspect of the patient lying supine), compared to a median displacement of 3.8 mm in regions more prone to motion; and iii) the synthetic motion exhibited predominant directionality consistent with the training set, resulting in larger motion in the superior-inferior direction (median and maximum amplitude of 4.58 mm and 20 mm, > 2x larger than the two remaining directions). Together, these results show the feasibility of the proposed framework for realistic motion simulation and synthesis of variable CBCT data.
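The diffeomorphism check reported in trend (i), a strictly positive Jacobian determinant of the deformation, can be sketched for a voxelized displacement field with numpy. This is a generic illustration of the standard check, not the paper's implementation; the deformation phi(x) = x + disp(x) is locally invertible where det(I + grad disp) > 0:

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of the deformation phi(x) = x + disp(x).

    disp: array of shape (3, D, H, W) holding a 3D displacement field.
    Returns an array of shape (D, H, W); values > 0 everywhere indicate
    a locally invertible (orientation-preserving) deformation.
    """
    # Partial derivatives d(disp_i)/d(x_j) via finite differences.
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]
    # Jacobian of phi is the identity plus the displacement gradient.
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# A small, smooth superior-inferior displacement keeps det(J) positive.
z, y, x = np.meshgrid(np.arange(16), np.arange(16), np.arange(16), indexing="ij")
disp = np.stack([0.1 * np.sin(2 * np.pi * z / 16),
                 np.zeros_like(y, dtype=float),
                 np.zeros_like(x, dtype=float)])
detJ = jacobian_determinant(disp)
```

Large or non-smooth displacements drive det(J) toward zero or negative values, flagging folding in the simulated motion field.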
Affiliation(s)
- Y Hu, Dept. of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- H Huang, Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen, Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski, Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Unberath, Dept. of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- C R Weiss, Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- A Sisniega, Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
10
Huang H, Siewerdsen JH, Zbijewski W, Weiss CR, Unberath M, Sisniega A. Context-Aware, Reference-Free Local Motion Metric for CBCT Deformable Motion Compensation. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12304:1230412. [PMID: 36381250 PMCID: PMC9665334 DOI: 10.1117/12.2646857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Deformable motion is one of the main challenges to image quality in interventional cone-beam CT (CBCT). Autofocus methods have been successfully applied for deformable motion compensation in CBCT, using multi-region joint optimization approaches that leverage the moderately smooth spatial variation of the deformable motion field within a local neighborhood. However, conventional autofocus metrics enforce sharp image appearance but do not guarantee the preservation of anatomical structures. Our previous work (DL-VIF) showed that deep convolutional neural networks (CNNs) can reproduce metrics of structural similarity (visual information fidelity - VIF), removing the need for a matched motion-free reference and providing quantification of motion degradation and structural integrity. Application of DL-VIF within local neighborhoods is challenged by the large variability of local image content across a CBCT volume and requires global context information for successful evaluation of motion effects. In this work, we propose a novel deep autofocus metric based on a context-aware, multi-resolution, deep CNN design. In addition to incorporating contextual information, the resulting metric generates a voxel-wise distribution of reference-free VIF values. The new metric, denoted CADL-VIF, was trained on simulated CBCT abdomen scans with deformable motion at random locations and with amplitude up to 30 mm. CADL-VIF achieved good correlation with the ground-truth VIF map across all test cases, with R² = 0.843 and slope = 0.941. When integrated into a multi-ROI deformable motion compensation method, CADL-VIF consistently reduced motion artifacts, yielding an average increase in SSIM of 0.129 in regions with severe motion and 0.113 in regions with mild motion. This work demonstrated the capability of CADL-VIF to recognize anatomical structures and penalize unrealistic images, which is a key step in developing reliable autofocus for complex deformable motion compensation in CBCT.
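The R² and slope figures above summarize a linear fit between the predicted reference-free metric map and the ground-truth VIF map. A minimal sketch of that evaluation with numpy (the function name and the toy data are illustrative, not from the paper):

```python
import numpy as np

def fit_metric_agreement(pred, ref):
    """Least-squares slope and R^2 between flattened metric maps.

    pred: predicted (reference-free) metric map, e.g. voxel-wise CADL-VIF.
    ref:  ground-truth metric map of the same shape.
    """
    p, r = np.ravel(pred), np.ravel(ref)
    slope, intercept = np.polyfit(r, p, 1)          # linear fit pred ~ ref
    resid = p - (slope * r + intercept)
    r2 = 1.0 - resid.var() / p.var()                # coefficient of determination
    return slope, r2

# Toy example: a prediction that is an exact affine function of the reference
# yields slope 0.9 and R^2 = 1 (perfect linear agreement).
ref = np.linspace(0.0, 1.0, 100).reshape(10, 10)
pred = 0.9 * ref + 0.05
slope, r2 = fit_metric_agreement(pred, ref)
```

A slope near 1 with high R² indicates the learned metric tracks the ground-truth VIF without systematic over- or under-estimation of motion degradation.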
Affiliation(s)
- H Huang, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- J H Siewerdsen, Department of Biomedical Engineering and Department of Radiology, Johns Hopkins University, Baltimore, MD
- W Zbijewski, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- C R Weiss, Department of Radiology, Johns Hopkins University, Baltimore, MD
- M Unberath, Department of Computer Science, Johns Hopkins University, Baltimore, MD
- A Sisniega, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD