1
Lu A, Huang H, Hu Y, Zbijewski W, Unberath M, Siewerdsen JH, Weiss CR, Sisniega A. Vessel-targeted compensation of deformable motion in interventional cone-beam CT. Med Image Anal 2024; 97:103254. [PMID: 38968908] [PMCID: PMC11365791] [DOI: 10.1016/j.media.2024.103254] [Received: 11/06/2023] [Revised: 06/01/2024] [Accepted: 06/24/2024] [Indexed: 07/07/2024]
Abstract
The present standard of care for unresectable liver cancer is transarterial chemoembolization (TACE), which uses chemotherapeutic particles to selectively embolize the arteries supplying hepatic tumors. Accurate volumetric identification of intricate fine vascularity is crucial for selective embolization. Three-dimensional imaging, particularly cone-beam CT (CBCT), aids visualization and targeting of small vessels in such highly variable anatomy, but long acquisition times result in intra-scan patient motion that distorts vascular structures and tissue boundaries. To improve the clarity of vascular anatomy and intra-procedural utility, this work proposes a targeted motion estimation and compensation framework that requires no prior information, external tracking, or user interaction. Motion estimation proceeds in two stages: (i) a target identification stage that segments arteries and catheters in the projection domain using a multi-view convolutional neural network to construct a coarse 3D vascular mask; and (ii) a targeted motion estimation stage that iteratively solves for the time-varying motion field by optimizing a vessel-enhancing objective function computed over the target vascular mask. The vessel-enhancing objective is derived from the eigenvalues of the local image Hessian to emphasize bright tubular structures. Motion compensation is achieved via spatial transformer operators that apply time-dependent deformations to partial-angle reconstructions, allowing efficient minimization via gradient backpropagation. The framework was trained and evaluated on anatomically realistic simulated motion-corrupted CBCTs mimicking TACE of hepatic tumors, at intermediate (3.0 mm) and large (6.0 mm) motion magnitudes.
Motion compensation substantially improved the median vascular Dice score (from 0.30 to 0.59 for large motion), image SSIM (from 0.77 to 0.93 for large motion), and vessel sharpness (from 0.189 mm⁻¹ to 0.233 mm⁻¹ for large motion) in simulated cases. On a clinical interventional CBCT, motion compensation also increased vessel sharpness (from 0.188 mm⁻¹ to 0.205 mm⁻¹) and reconstructed vessel length (median from 37.37 mm to 41.00 mm). The proposed anatomy-aware motion compensation framework presents a promising approach to improving the utility of CBCT for intra-procedural vascular imaging, facilitating selective embolization procedures.
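The vessel-enhancing objective above is built from the eigenvalues of the local image Hessian: for a bright tubular structure, one eigenvalue is strongly negative across the vessel while the other is near zero along it. As a rough 2D illustration of that idea (a Frangi-style response, not the authors' implementation; the parameters `beta` and `c` are assumed values):

```python
import math

def hessian_2d(img, y, x):
    """Finite-difference Hessian of a 2D image (list of rows) at pixel (y, x)."""
    ixx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    iyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    ixy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return ixx, iyy, ixy

def vesselness(img, y, x, beta=0.5, c=0.5):
    """Frangi-style response: bright tubular structures have |l1| << |l2|, l2 < 0."""
    ixx, iyy, ixy = hessian_2d(img, y, x)
    # eigenvalues of the symmetric 2x2 Hessian, sorted by magnitude
    tr, det = ixx + iyy, ixx * iyy - ixy * ixy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = sorted((tr / 2 + disc, tr / 2 - disc), key=abs)  # |l1| <= |l2|
    if l2 >= 0:  # not a bright structure
        return 0.0
    rb = abs(l1) / abs(l2)      # blobness: low for line-like structures
    s2 = l1 * l1 + l2 * l2      # second-order structure strength
    return math.exp(-rb * rb / (2 * beta * beta)) * (1 - math.exp(-s2 / (2 * c * c)))

# Toy image: a bright horizontal ridge on a dark background.
img = [[1.0 if y == 3 else 0.0 for x in range(7)] for y in range(7)]
print(vesselness(img, 3, 3) > vesselness(img, 1, 3))  # → True: the ridge responds most
```

In the paper this kind of response is evaluated over the 3D vascular mask and maximized through the spatial-transformer warps; the sketch only shows why the eigenvalue structure singles out tubes.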
Affiliation(s)
- Alexander Lu
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Heyuan Huang
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Yicheng Hu
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Wojciech Zbijewski
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Mathias Unberath
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Jeffrey H Siewerdsen
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
  - Departments of Imaging Physics, Radiation Physics, and Neurosurgery, The University of Texas M.D. Anderson Cancer Center, TX, USA
- Clifford R Weiss
  - Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Sisniega
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Traylor Research Building #622, 720 Rutland Avenue, Baltimore, MD 21205, USA
2
Li X, Bellotti R, Bachtiary B, Hrbacek J, Weber DC, Lomax AJ, Buhmann JM, Zhang Y. A unified generation-registration framework for improved MR-based CT synthesis in proton therapy. Med Phys 2024. [PMID: 39137294] [DOI: 10.1002/mp.17338] [Received: 01/22/2024] [Revised: 06/11/2024] [Accepted: 07/06/2024] [Indexed: 08/15/2024]
Abstract
BACKGROUND The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. The critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas such as the head-and-neck. Misalignments result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of treatment planning. PURPOSE This study introduces a novel network that unifies the image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS The approach combines a generation network (G) with a deformable registration network (R), optimizing them jointly in MR-to-CT synthesis by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). The method was validated on a dataset of 60 head-and-neck patients, with 12 cases reserved for holdout testing. RESULTS Compared to the baseline Pix2Pix method with an MAE of 124.95 ± 30.74 HU, the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. From a dosimetric perspective, plans recalculated on the resulting sCTs showed markedly reduced discrepancy relative to the reference proton plans.
CONCLUSIONS This study demonstrates that a holistic MR-based CT synthesis approach integrating both image-to-image translation and deformable registration significantly improves the precision and quality of sCT generation, particularly for challenging body regions with varied anatomic changes between corresponding MR and CT.
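The interdependence the authors exploit — alignment quality determines synthesis quality — can be shown with a deliberately tiny 1D analogue (not their network): "registration" is an exhaustive search over integer shifts using a gain/bias-invariant correlation, and "generation" is a least-squares intensity mapping fitted on the aligned pair. All signals and names here are invented for illustration:

```python
import math

def corr(a, b):
    """Normalized correlation: invariant to the gain/bias the 'generator' must learn."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a)) or 1.0
    db = math.sqrt(sum((y - mb) ** 2 for y in b)) or 1.0
    return num / (da * db)

def shift(sig, s):
    """Circular integer shift: a crude stand-in for a deformable displacement field."""
    s %= len(sig)
    return sig[-s:] + sig[:-s] if s else list(sig)

def register(moving, fixed, max_s=3):
    """'R': pick the shift that best aligns the signals, ignoring intensity scale."""
    return max(range(-max_s, max_s + 1), key=lambda s: corr(shift(moving, s), fixed))

def generate(src, ref):
    """'G': least-squares gain/bias mapping source intensities to reference ones."""
    n = len(src)
    mx, my = sum(src) / n, sum(ref) / n
    var = sum((x - mx) ** 2 for x in src) or 1.0
    g = sum((x - mx) * (y - my) for x, y in zip(src, ref)) / var
    return g, my - g * mx

# Toy "MR" signal; "CT" reference = gain 2, bias 10, shifted by 2 samples.
mr = [0, 0, 1, 4, 1, 0, 0, 0]
ct = shift([2 * v + 10 for v in mr], 2)

s = register(mr, ct)               # align first (gain/bias-invariant) ...
g, b = generate(shift(mr, s), ct)  # ... then fit the mapping on aligned data
print(s, g, b)                     # → 2 2.0 10.0: true shift, gain, and bias recovered
```

Fitting G on the misaligned pair instead gives a badly biased mapping, which is the blurred-sCT failure mode the abstract describes; the paper's alternating optimization generalizes this to deformable DVFs and a learned UNet generator.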
Affiliation(s)
- Xia Li
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Computer Science, ETH Zürich, Zürich, Switzerland
- Renato Bellotti
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Physics, ETH Zürich, Zürich, Switzerland
- Barbara Bachtiary
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Jan Hrbacek
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Damien C Weber
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Radiation Oncology, University Hospital of Zürich, Zürich, Switzerland
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Antony J Lomax
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Physics, ETH Zürich, Zürich, Switzerland
- Ye Zhang
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
3
Huang Y, Zhang X, Hu Y, Johnston AR, Jones CK, Zbijewski WB, Siewerdsen JH, Helm PA, Witham TF, Uneri A. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis. Comput Med Imaging Graph 2024; 114:102365. [PMID: 38471330] [DOI: 10.1016/j.compmedimag.2024.102365] [Received: 10/04/2023] [Revised: 01/31/2024] [Accepted: 02/22/2024] [Indexed: 03/14/2024]
Abstract
PURPOSE Improved integration and use of preoperative imaging during surgery hold significant potential for enhancing treatment planning and instrument guidance through surgical navigation. Despite its prevalent use in diagnostic settings, MR imaging is rarely used for navigation in spine surgery. This study aims to leverage MR imaging for intraoperative visualization of spine anatomy, particularly in cases where CT imaging is unavailable or when minimizing radiation exposure is essential, such as in pediatric surgery. METHODS This work presents a method for deformable 3D-2D registration of preoperative MR images with a novel intraoperative long-length tomosynthesis imaging modality (viz., Long-Film [LF]). A conditional generative adversarial network is used to translate MR images to an intermediate bone image suitable for registration, followed by a model-based 3D-2D registration algorithm to deformably map the synthesized images to LF images. The algorithm's performance was evaluated on cadaveric specimens with implanted markers and controlled deformation, and in clinical images of patients undergoing spine surgery as part of a large-scale clinical study on LF imaging. RESULTS The proposed method yielded a median 2D projection distance error of 2.0 mm (interquartile range [IQR]: 1.1-3.3 mm) and a 3D target registration error of 1.5 mm (IQR: 0.8-2.1 mm) in cadaver studies. Notably, the multi-scale approach exhibited significantly higher accuracy compared to rigid solutions and effectively managed the challenges posed by piecewise rigid spine deformation. The robustness and consistency of the method were evaluated on clinical images, yielding no outliers on vertebrae without surgical instrumentation and 3% outliers on vertebrae with instrumentation. CONCLUSIONS This work constitutes the first reported approach for deformable MR to LF registration based on deep image synthesis. 
The proposed framework makes preoperative annotations and planning information available during surgery and enables surgical navigation within the context of MR images and/or dual-plane LF images.
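The 2D projection distance error reported above measures, in the detector plane, how far projected 3D landmarks fall from their detected locations. A minimal sketch under an assumed ideal pinhole geometry (the study's actual C-arm/LF system geometry is more involved, and the focal length `f` here is an invented value):

```python
import math

def project(p, f=1000.0):
    """Ideal pinhole projection of a 3D point (x, y, z) onto the detector plane."""
    x, y, z = p
    return (f * x / z, f * y / z)

def median(vals):
    d = sorted(vals)
    n = len(d)
    return d[n // 2] if n % 2 else (d[n // 2 - 1] + d[n // 2]) / 2

def projection_distance_error(points3d, detected2d, f=1000.0):
    """Median 2D distance between projected markers and their detections."""
    return median(math.dist(project(p, f), q) for p, q in zip(points3d, detected2d))

# Implanted markers (mm) and their detector-plane detections; one is off by 2 units.
markers = [(0.0, 0.0, 1000.0), (10.0, 0.0, 1000.0), (0.0, 5.0, 1000.0)]
detected = [(0.0, 2.0), (10.0, 0.0), (0.0, 6.0)]
print(projection_distance_error(markers, detected))  # → 1.0
```

The cadaver results in the abstract (median 2.0 mm, IQR 1.1-3.3 mm) are exactly this kind of summary statistic, computed with the calibrated system geometry rather than this toy pinhole.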
Affiliation(s)
- Yixuan Huang
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Xiaoxuan Zhang
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Yicheng Hu
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Ashley R Johnston
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Craig K Jones
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Wojciech B Zbijewski
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeffrey H Siewerdsen
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Timothy F Witham
  - Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, United States
- Ali Uneri
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
4
Chang Q, Wang Y. Structure-aware independently trained multi-scale registration network for cardiac images. Med Biol Eng Comput 2024; 62:1795-1808. [PMID: 38381202] [DOI: 10.1007/s11517-024-03039-6] [Received: 08/08/2023] [Accepted: 01/31/2024] [Indexed: 02/22/2024]
Abstract
Image registration is a primary task in various medical image analysis applications. However, cardiac image registration is difficult due to the large non-rigid deformation of the heart and its complex anatomical structure. This paper proposes a structure-aware independently trained multi-scale registration network (SIMReg) to address this challenge. Each registration network is trained independently on image pairs of a different resolution, extracting features of large-deformation image pairs at multiple scales. In the testing stage, the large-deformation registration is decomposed into a multi-scale registration process, and the deformation fields of different resolutions are fused by a step-by-step deformation method, thus avoiding the difficulty of directly processing large deformations. Meanwhile, the targeted introduction of MIND (modality independent neighborhood descriptor) structural features to guide network training enhances the registration of cardiac structural contours and improves the registration of local details. In experiments on the open cardiac dataset ACDC (automated cardiac diagnosis challenge), the proposed method achieved an average Dice score of 0.833. Comparative experiments showed that SIMReg better addresses the problem of cardiac image registration and achieves a better registration effect on cardiac images.
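The step-by-step fusion of deformation fields from coarse to fine can be sketched as displacement-field composition: the fine field is sampled at positions already displaced by the coarse field. A 1D, nearest-neighbor toy (illustrative of the composition rule only, not the SIMReg code):

```python
def compose(u_coarse, u_fine):
    """Total displacement u(x) = u_coarse(x) + u_fine(x + u_coarse(x)),
    sampling the fine field at coarsely-warped positions (nearest neighbor)."""
    n = len(u_coarse)
    out = []
    for x in range(n):
        xm = min(max(int(round(x + u_coarse[x])), 0), n - 1)  # clamp to the grid
        out.append(u_coarse[x] + u_fine[xm])
    return out

u_c = [2.0] * 8    # coarse stage: captures the bulk of a large deformation
u_f = [0.5] * 8    # fine stage: residual correction on the coarsely-warped image
print(compose(u_c, u_f))  # each sample ends up displaced by 2.5
```

In practice each stage's field would be upsampled to a common grid and sampled with (tri)linear interpolation; the point is that composing stages lets no single network face the full large deformation at once.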
Affiliation(s)
- Qing Chang
  - School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Yaqi Wang
  - School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
5
Deng L, Zou Y, Yang X, Wang J, Huang S. L2NLF: a novel linear-to-nonlinear framework for multi-modal medical image registration. Biomed Eng Lett 2024; 14:497-509. [PMID: 38645595] [PMCID: PMC11026354] [DOI: 10.1007/s13534-023-00344-1] [Received: 07/22/2023] [Revised: 10/29/2023] [Accepted: 12/11/2023] [Indexed: 04/23/2024]
Abstract
In recent years, deep learning has driven significant progress in medical image registration, and non-rigid registration methods that use deep neural networks to generate a deformation field achieve higher accuracy. However, unlike monomodal registration, multimodal medical image registration is a more complex and challenging task. This paper proposes a new linear-to-nonlinear framework (L2NLF) for multimodal medical image registration. The first, linear stage is essentially image conversion, which reduces the difference between the two images without changing the authenticity of the medical images, thus transforming multimodal registration into monomodal registration. The second, nonlinear stage is unsupervised deformable registration based on a deep neural network. A new registration network, CrossMorph, is designed: a deep neural network with a U-Net-like structure. As the backbone of the encoder, the volume CrossFormer block better extracts local and global information, and a booster module aids the recovery of both deep and shallow features. Qualitative and quantitative experiments on T1 and T2 brain data from 240 patients show that L2NLF achieves an excellent registration effect in the image conversion stage with very low computation and does not alter the authenticity of the converted images. Compared with current state-of-the-art registration methods, CrossMorph effectively reduces average surface distance, improves Dice score, and improves the smoothness of the deformation field. The proposed methods have potential value in clinical application.
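The Dice score reported here (and in several of the studies above) measures volume overlap between two binary segmentations. A minimal version on flattened masks, for reference:

```python
def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks given as flat sequences."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(1 for x in a if x) + sum(1 for y in b if y)
    return 2 * inter / size if size else 1.0  # two empty masks count as identical

print(dice([1, 1, 1, 0], [1, 1, 0, 0]))  # → 0.8 (2*2 / (3+2))
```

Average surface distance, the other metric cited, instead averages distances between the boundary voxels of the two masks and so penalizes contour errors that Dice barely notices.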
Affiliation(s)
- Liwei Deng
  - Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, China
- Yanchao Zou
  - Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, China
- Xin Yang
  - Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, Guangdong, China
- Jing Wang
  - Institute for Brain Research and Rehabilitation, South China Normal University, Zhongshan Avenue, Guangzhou 510631, China
- Sijuan Huang
  - Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, Guangdong, China
6
Xie K, Gao L, Zhang H, Zhang S, Xi Q, Zhang F, Sun J, Lin T, Sui J, Ni X. GAN-based metal artifacts region inpainting in brain MRI imaging with reflective registration. Med Phys 2024; 51:2066-2080. [PMID: 37665773] [DOI: 10.1002/mp.16724] [Received: 12/19/2022] [Revised: 08/17/2023] [Accepted: 08/19/2023] [Indexed: 09/06/2023]
Abstract
BACKGROUND AND OBJECTIVE Metallic implants can introduce magnetic field distortions in magnetic resonance imaging (MRI), resulting in image distortion such as bulk shifts and signal-loss artifacts. The Metal Artifacts Region Inpainting Network (MARINet), which exploits the symmetry of brain MRI images, has been developed to generate normal MRI images in the image domain and improve image quality. METHODS T1-weighted MRI images containing or located near the teeth of 100 patients were collected, yielding a total of 9000 slices after data augmentation. MARINet, based on U-Net with a dual-path encoder, was then employed to inpaint the artifacts in MRI images. Its input comprises the original image and the flipped registered image, with partial convolution used concurrently. MARINet was subsequently compared with PConv (partial convolution), GConv (gated convolution), and SDEdit (a diffusion-based model) for inpainting the artifact region of MRI images, using the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) over the mask to compare the methods. In addition, artifact masks of clinical MRI images annotated by physicians were inpainted. RESULTS MARINet could directly and effectively inpaint the incomplete MRI images generated by masks in the image domain. For PConv, GConv, SDEdit, and MARINet, the masked MAEs were 0.1938, 0.1904, 0.1876, and 0.1834, respectively, and the masked PSNRs were 17.39, 17.40, 17.49, and 17.60 dB, respectively. Visualization results also suggest that the network can recover tissue texture, alveolar shape, and tooth contour. For clinical artifact-corrupted MRI images, MARINet completed the artifact-region inpainting task more effectively than the other models.
CONCLUSIONS By leveraging the quasi-symmetry of brain MRI images, MARINet can directly and effectively inpaint metal artifacts in MRI images in the image domain, restoring tooth contour and detail and thereby enhancing image quality.
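Partial convolution, used inside MARINet and as the PConv baseline, convolves only over valid (unmasked) pixels and renormalizes the response by the number of valid taps, so hole pixels do not drag the output toward zero; the mask then shrinks as holes get filled. A 1D sketch (illustrative of the operation, not the paper's layer):

```python
def partial_conv_1d(x, mask, w, b=0.0):
    """Partial convolution: only valid (mask=1) inputs contribute, and the
    response is renormalized by the number of valid taps in the window."""
    k, n, pad = len(w), len(x), len(w) // 2
    out, new_mask = [], []
    for i in range(n):
        acc, valid = 0.0, 0
        for j in range(k):
            t = i + j - pad
            if 0 <= t < n and mask[t]:
                acc += w[j] * x[t]
                valid += 1
        if valid:
            out.append(acc * k / valid + b)  # renormalize, then add bias
            new_mask.append(1)               # this output pixel is now valid
        else:
            out.append(0.0)
            new_mask.append(0)               # window saw only hole pixels
    return out, new_mask

x, m = [1.0, 1.0, 0.0, 1.0, 1.0], [1, 1, 0, 1, 1]
out, new_mask = partial_conv_1d(x, m, [1 / 3, 1 / 3, 1 / 3])
print([round(v, 6) for v in out], new_mask)  # the hole at index 2 is filled from its valid neighbors
```

Stacking such layers lets valid content propagate inward from the artifact-region border, which is why the hole's mask bit flips to 1 after a single pass here.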
Affiliation(s)
- Kai Xie
  - Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Liugang Gao
  - Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Heng Zhang
  - Center for Medical Physics, Nanjing Medical University, Changzhou, China
  - Changzhou Key Laboratory of Medical Physics, Changzhou, China
- Sai Zhang
  - Center for Medical Physics, Nanjing Medical University, Changzhou, China
  - Changzhou Key Laboratory of Medical Physics, Changzhou, China
- Qianyi Xi
  - Center for Medical Physics, Nanjing Medical University, Changzhou, China
  - Changzhou Key Laboratory of Medical Physics, Changzhou, China
- Fan Zhang
  - Center for Medical Physics, Nanjing Medical University, Changzhou, China
  - Changzhou Key Laboratory of Medical Physics, Changzhou, China
- Jiawei Sun
  - Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Tao Lin
  - Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jianfeng Sui
  - Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Xinye Ni
  - Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China
  - Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
  - Center for Medical Physics, Nanjing Medical University, Changzhou, China
  - Changzhou Key Laboratory of Medical Physics, Changzhou, China
7
Tian D, Sun G, Zheng H, Yu S, Jiang J. CT-CBCT deformable registration using weakly-supervised artifact-suppression transfer learning network. Phys Med Biol 2023; 68:165011. [PMID: 37433303] [DOI: 10.1088/1361-6560/ace675] [Received: 04/18/2023] [Accepted: 07/11/2023] [Indexed: 07/13/2023]
Abstract
Objective. Computed tomography to cone-beam computed tomography (CT-CBCT) deformable registration has great potential in adaptive radiotherapy, playing an important role in tumor tracking, secondary planning, accurate irradiation, and the protection of organs at risk. Neural networks have been improving CT-CBCT deformable registration, and almost all registration algorithms based on neural networks rely on the gray values of both CT and CBCT. The gray value is a key factor in the loss function, parameter training, and final efficacy of the registration. Unfortunately, scattering artifacts in CBCT affect the gray values of different pixels inconsistently, so direct registration of the original CT-CBCT introduces an artifact superposition loss. Approach. In this study, a histogram analysis of gray values was performed. Based on the gray-value distribution characteristics of different regions in CT and CBCT, the degree of artifact superposition in the region of disinterest was found to be much higher than that in the region of interest, and the former was the main source of the artifact superposition loss. Consequently, a new weakly supervised two-stage transfer-learning network based on artifact suppression was proposed: the first stage is a pre-training network designed to suppress artifacts in the region of disinterest, and the second stage is a convolutional neural network that registers the suppressed CBCT to the CT. Main results. In a comparative test of thoracic CT-CBCT deformable registration on data collected from the Elekta XVI system, rationality and accuracy after artifact suppression were confirmed to be significantly improved compared with other algorithms without artifact suppression. Significance.
This study proposed and verified a new deformable registration method with multi-stage neural networks, which effectively suppresses artifacts and further improves registration by incorporating a pre-training technique and an attention mechanism.
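The region-wise histogram analysis can be approximated as: bin the gray values of a region in CT and in the corresponding CBCT region, then compare the normalized histograms; a large distance flags heavy artifact superposition. A toy proxy (the binning scheme, value ranges, and L1 distance are assumptions, not the paper's exact analysis):

```python
def gray_hist(vals, bins=8, lo=0.0, hi=1.0):
    """Normalized gray-value histogram over [lo, hi)."""
    h = [0] * bins
    for v in vals:
        i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        h[max(i, 0)] += 1
    return [c / len(vals) for c in h]

def hist_distance(region_ct, region_cbct):
    """L1 distance between histograms: a crude artifact-superposition score."""
    return sum(abs(p - q) for p, q in zip(gray_hist(region_ct), gray_hist(region_cbct)))

roi       = [0.40, 0.42, 0.41, 0.43]  # region of interest: consistent gray values
rodi_ct   = [0.20, 0.22, 0.21, 0.23]  # region of disinterest in CT ...
rodi_cbct = [0.70, 0.75, 0.72, 0.78]  # ... heavily shifted by scatter in CBCT
print(hist_distance(roi, roi), hist_distance(rodi_ct, rodi_cbct))
```

A high score on the region of disinterest, as in this toy, is what motivates suppressing that region's artifacts in a pre-training stage before registering on gray values.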
Affiliation(s)
- Dingshu Tian
  - University of Science and Technology of China, Hefei 230026, People's Republic of China
  - Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, People's Republic of China
- Guangyao Sun
  - SuperSafety Science and Technology Co., Ltd, Hefei 230088, People's Republic of China
  - International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Huaqing Zheng
  - International Academy of Neutron Science, Qingdao 266199, People's Republic of China
  - Super Accuracy Science and Technology Co., Ltd, Nanjing 210044, People's Republic of China
- Shengpeng Yu
  - SuperSafety Science and Technology Co., Ltd, Hefei 230088, People's Republic of China
  - International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Jieqiong Jiang
  - Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, People's Republic of China