1
Huang H, Liu Y, Siewerdsen JH, Lu A, Hu Y, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Deformable motion compensation in interventional cone-beam CT with a context-aware learned autofocus metric. Med Phys 2024; 51:4158-4180. [PMID: 38733602] [DOI: 10.1002/mp.17125] [Received: 12/25/2023] [Revised: 04/02/2024] [Accepted: 05/03/2024]
Abstract
PURPOSE Interventional cone-beam CT (CBCT) offers 3D visualization of soft-tissue and vascular anatomy, enabling 3D guidance of abdominal interventions. However, its long acquisition time makes CBCT susceptible to patient motion. Image-based autofocus offers a suitable platform for compensation of deformable motion in CBCT, but it relies on handcrafted motion metrics that are based on first-order image properties and lack awareness of the underlying anatomy. This work proposes a data-driven approach to motion quantification via a learned, context-aware, deformable metric, VIF_DL, that quantifies both the amount of motion degradation and the realism of the structural anatomical content in the image. METHODS The proposed VIF_DL was modeled as a deep convolutional neural network (CNN) trained to recreate a reference-based structural similarity metric, visual information fidelity (VIF). The deep CNN acted on motion-corrupted images, providing an estimate of the spatial VIF map that would be obtained against a motion-free reference, capturing both motion distortion and anatomic plausibility. The deep CNN featured a multi-branch architecture with a high-resolution branch for estimation of voxel-wise VIF on a small volume of interest. A second contextual, low-resolution branch provided features associated with anatomical context for disentanglement of motion effects and anatomical appearance. The deep CNN was trained on paired motion-free and motion-corrupted data obtained with a high-fidelity forward projection model for a protocol involving 120 kV and 9.90 mGy. The performance of VIF_DL was evaluated via metrics of correlation with ground-truth VIF and with the underlying deformable motion field in simulated data with deformable motion fields with amplitudes ranging from 5 to 20 mm and frequencies from 2.4 up to 4 cycles/scan.
Robustness to variation in tissue contrast and noise levels was assessed in simulation studies with varying beam energy (90-120 kV) and dose (1.19-39.59 mGy). Further validation was obtained in experimental studies with a deformable phantom. Final validation was obtained via integration of VIF_DL into an autofocus compensation framework, applied to motion compensation on experimental datasets and evaluated via metrics of spatial resolution on soft-tissue boundaries and sharpness of contrast-enhanced vascularity. RESULTS The magnitude and spatial map of VIF_DL showed consistently high correlation with the ground truth in both simulated and real data, yielding average normalized cross-correlation (NCC) values of 0.95 and 0.88, respectively. Similarly, VIF_DL achieved good correlation with the underlying motion field, with an average NCC of 0.90. In experimental phantom studies, VIF_DL properly reflected changes in motion amplitude and frequency: voxel-wise averaging of the local VIF_DL across the full reconstructed volume yielded an average value of 0.69 for the case with mild motion (2 mm, 12 cycles/scan) and 0.29 for the case with severe motion (12 mm, 6 cycles/scan). Autofocus motion compensation using VIF_DL resulted in noticeable mitigation of motion artifacts and improved spatial resolution of soft-tissue and high-contrast structures, with reductions in edge spread function width of 8.78% and 9.20%, respectively. Motion compensation also increased the conspicuity of contrast-enhanced vascularity, reflected in a 9.64% increase in vessel sharpness.
CONCLUSION The proposed VIF_DL, featuring a novel context-aware architecture, demonstrated its capacity as a reference-free surrogate for structural similarity to quantify motion-induced degradation of image quality and the anatomical plausibility of image content. The validation studies showed robust performance across motion patterns, x-ray techniques, and anatomical instances. The proposed anatomy- and context-aware metric offers a powerful alternative to conventional motion estimation metrics, and a step forward for the application of deep autofocus motion compensation to guidance in clinical interventional procedures.
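The abstract above reports agreement with ground truth as normalized cross-correlation (NCC). As a reference for how that score behaves, here is a minimal, illustrative sketch (not code from the paper; the function name and the flattening of volumes are my own choices):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized images/volumes.

    Returns a value in [-1, 1]; 1 indicates perfect structural agreement.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()          # remove mean so only structure is correlated
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    # Constant (zero-variance) inputs carry no structure to correlate.
    return float(a @ b / denom) if denom > 0 else 0.0
```

With this convention, identical volumes score 1.0 and sign-inverted structure scores -1.0, matching the interpretation of the 0.95/0.88 values reported above.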
Affiliation(s)
- Heyuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Yixuan Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Alexander Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Clifford R Weiss
- Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
2
Bousse A, Kandarpa VSS, Rit S, Perelli A, Li M, Wang G, Zhou J, Wang G. Systematic Review on Learning-based Spectral CT. IEEE Transactions on Radiation and Plasma Medical Sciences 2024; 8:113-137. [PMID: 38476981] [PMCID: PMC10927029] [DOI: 10.1109/trpms.2023.3314131]
Abstract
Spectral computed tomography (CT) has recently emerged as an advanced version of medical CT and significantly improves conventional (single-energy) CT. Spectral CT has two main forms: dual-energy computed tomography (DECT) and photon-counting computed tomography (PCCT), which offer image improvement, material decomposition, and feature quantification relative to conventional CT. However, the inherent challenges of spectral CT, evidenced by data and image artifacts, remain a bottleneck for clinical applications. To address these problems, machine learning techniques have been widely applied to spectral CT. In this review, we present the state-of-the-art data-driven techniques for spectral CT.
Affiliation(s)
- Alexandre Bousse
- LaTIM, Inserm UMR 1101, Université de Bretagne Occidentale, 29238 Brest, France
- Simon Rit
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Étienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373, Lyon, France
- Alessandro Perelli
- Department of Biomedical Engineering, School of Science and Engineering, University of Dundee, DD1 4HN, UK
- Mengzhou Li
- Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
- Guobao Wang
- Department of Radiology, University of California Davis Health, Sacramento, USA
- Jian Zhou
- CTIQ, Canon Medical Research USA, Inc., Vernon Hills, 60061, USA
- Ge Wang
- Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
3
Li S, Zhou B. A review of radiomics and genomics applications in cancers: the way towards precision medicine. Radiat Oncol 2022; 17:217. [PMID: 36585716] [PMCID: PMC9801589] [DOI: 10.1186/s13014-022-02192-2] [Received: 09/15/2022] [Accepted: 12/27/2022]
Abstract
The application of radiogenomics in oncology has great prospects in precision medicine. Radiogenomics combines large volumes of radiomic features from medical digital images, genetic data from high-throughput sequencing, and clinical-epidemiological data into mathematical modelling. The amalgamation of radiomics and genomics provides an approach to better study the molecular mechanisms of tumour pathogenesis, as well as new evidence-supported strategies to identify the characteristics of cancer patients, make clinical decisions by predicting prognosis, and improve the development of individualized treatment guidance. In this review, we summarize recent research on radiogenomics applications in solid cancers and present the challenges impeding the adoption of radiomics in clinical practice. More standard guidelines are required to normalize radiomics into reproducible and convincing analyses and to develop it into a mature field.
Affiliation(s)
- Simin Li
- Department of Clinical Epidemiology and Center of Evidence-Based Medicine, The First Hospital of China Medical University, Shenyang, 110001, Liaoning, People's Republic of China
- Baosen Zhou
- Department of Clinical Epidemiology and Center of Evidence-Based Medicine, The First Hospital of China Medical University, Shenyang, 110001, Liaoning, People's Republic of China
4
Huang H, Siewerdsen JH, Zbijewski W, Weiss CR, Unberath M, Ehtiati T, Sisniega A. Reference-free learning-based similarity metric for motion compensation in cone-beam CT. Phys Med Biol 2022; 67. [PMID: 35636391] [DOI: 10.1088/1361-6560/ac749a] [Received: 03/11/2022] [Accepted: 05/30/2022]
Abstract
Purpose. Patient motion artifacts present a prevalent challenge to image quality in interventional cone-beam CT (CBCT). We propose a novel reference-free similarity metric (DL-VIF) that leverages the capability of deep convolutional neural networks (CNN) to learn features associated with motion artifacts within realistic anatomical features. DL-VIF aims to address shortcomings of conventional metrics of motion-induced image quality degradation, which favor characteristics associated with motion-free images, such as sharpness or piecewise constancy, but lack any awareness of the underlying anatomy, potentially promoting images depicting unrealistic content. DL-VIF was integrated in an autofocus motion compensation framework to test its performance for motion estimation in interventional CBCT. Methods. DL-VIF is a reference-free surrogate for the previously reported visual information fidelity (VIF) metric, computed against a motion-free reference, generated using a CNN trained on simulated motion-corrupted and motion-free CBCT data. Relatively shallow (2-ResBlock) and deep (3-ResBlock) CNN architectures were trained and tested to assess sensitivity to motion artifacts and generalizability to unseen anatomy and motion patterns. DL-VIF was integrated into an autofocus framework for rigid motion compensation in head/brain CBCT and assessed in simulation and cadaver studies in comparison to a conventional gradient entropy metric. Results. The 2-ResBlock architecture better reflected motion severity and extrapolated to unseen data, whereas the 3-ResBlock architecture was found more susceptible to overfitting, limiting its generalizability to unseen scenarios. DL-VIF outperformed gradient entropy in simulation studies, yielding average multi-resolution structural similarity index (SSIM) improvements over the uncompensated image of 0.068 and 0.034, respectively, referenced to motion-free images.
DL-VIF was also more robust in motion compensation, evidenced by reduced variance in SSIM across various motion patterns (σ_DL-VIF = 0.008 versus σ_gradient entropy = 0.019). Similarly, in cadaver studies, DL-VIF demonstrated superior motion compensation compared to gradient entropy (an average SSIM improvement of 0.043 (5%) versus little improvement and even degradation in SSIM, respectively) and visually improved image quality even in severely motion-corrupted images. Conclusion. The studies demonstrated the feasibility of building reference-free similarity metrics for quantification of motion-induced image quality degradation and distortion of anatomical structures in CBCT. DL-VIF provides a reliable surrogate for motion severity, penalizes unrealistic distortions, and presents a valuable new objective function for autofocus motion compensation in CBCT.
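Both this abstract and the first entry use a conventional gradient entropy metric as the autofocus baseline. A minimal sketch of one common formulation (Shannon entropy of the normalized gradient-magnitude image; an illustrative assumption, not necessarily the papers' exact implementation):

```python
import numpy as np

def gradient_entropy(img, eps=1e-12):
    """Shannon entropy of the normalized gradient-magnitude image.

    Sharp images concentrate gradient energy at a few edges (low entropy),
    while motion blur spreads it out (higher entropy), so an autofocus
    search using this metric seeks to minimize it.
    """
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    p = mag / (mag.sum() + eps)   # treat gradient magnitude as a distribution
    p = p[p > eps]                # drop zeros so log() is well defined
    return float(-(p * np.log(p)).sum())
```

Such first-order metrics are exactly what the abstracts above criticize: they reward sharpness regardless of whether the sharpened content is anatomically plausible.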
Affiliation(s)
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- T Ehtiati
- Siemens Medical Solutions USA, Inc., Imaging & Therapy Systems, Hoffman Estates, IL, United States of America
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
5
Ko Y, Moon S, Baek J, Shim H. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module. Med Image Anal 2020; 67:101883. [PMID: 33166775] [DOI: 10.1016/j.media.2020.101883] [Received: 04/02/2020] [Revised: 10/13/2020] [Accepted: 10/14/2020]
Abstract
Motion artifacts are a major factor that can degrade the diagnostic performance of computed tomography (CT) images. In particular, motion artifacts become considerably more severe when an imaging system requires a long scan time, such as in dental CT or cone-beam CT (CBCT) applications, where patients generate rigid and non-rigid motions. To address this problem, we propose a new real-time technique for motion artifact reduction that utilizes a deep residual network with an attention module. Our attention module was designed to increase the model capacity by amplifying or attenuating the residual features according to their importance. We trained and evaluated the network by creating four benchmark datasets with rigid motions or with both rigid and non-rigid motions under a step-and-shoot fan-beam CT (FBCT) or a CBCT. Each dataset provided a set of motion-corrupted CT images and their ground-truth CT image pairs. The strong modeling power of the proposed network allowed us to successfully handle motion artifacts from the two CT systems under various motion scenarios in real time. As a result, the proposed model demonstrated clear performance benefits. In addition, we compared our model with Wasserstein generative adversarial network (WGAN)-based models and a deep residual network (DRN)-based model, which are among the most powerful techniques for CT denoising and natural RGB image deblurring, respectively. Based on extensive analysis and comparisons using the four benchmark datasets, we confirmed that our model outperformed the aforementioned competitors. Our benchmark datasets and implementation code are available at https://github.com/youngjun-ko/ct_mar_attention.
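The attention module described above amplifies or attenuates residual features according to their importance. A toy numpy sketch of one such gating scheme (squeeze-and-excitation-style channel attention applied to a residual branch; the paper's actual architecture differs, and `residual_fn` stands in for a learned convolutional branch):

```python
import numpy as np

def channel_attention(features):
    """Rescale each channel of a (C, H, W) feature map by a sigmoid weight.

    The per-channel weight is derived from global average pooling, so
    informative channels are amplified and unimportant ones attenuated.
    """
    squeezed = features.mean(axis=(1, 2))      # global average pool -> (C,)
    gate = 1.0 / (1.0 + np.exp(-squeezed))     # sigmoid: weights in (0, 1)
    return features * gate[:, None, None]      # broadcast gate over H, W

def attention_residual_block(x, residual_fn):
    """Residual connection whose residual branch is modulated by the gate."""
    return x + channel_attention(residual_fn(x))
```

In the real network the squeeze would feed a small learned bottleneck before the sigmoid; the sketch keeps only the gating mechanism to show how residual features get amplified or attenuated.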
Affiliation(s)
- Youngjun Ko
- School of Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea
- Seunghyuk Moon
- School of Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea
- Hyunjung Shim
- School of Integrated Technology, Yonsei University, Songdogwahak-ro 85, Yeonsu-gu, Incheon, South Korea