1
Zhang W, Zhao L, Gou H, Gong Y, Zhou Y, Feng Q. PRSCS-Net: Progressive 3D/2D rigid Registration network with the guidance of Single-view Cycle Synthesis. Med Image Anal 2024; 97:103283. [PMID: 39094463] [DOI: 10.1016/j.media.2024.103283] [Received: 01/22/2024] [Revised: 07/08/2024] [Accepted: 07/17/2024]
Abstract
The 3D/2D registration of 3D pre-operative images (computed tomography, CT) to 2D intra-operative images (X-ray) plays an important role in image-guided spine surgery. Conventional iterative approaches are time-consuming, while existing learning-based approaches incur high computational costs and perform poorly under large misalignment because of projection-induced losses or ill-posed reconstruction. In this paper, we propose a Progressive 3D/2D rigid Registration network with the guidance of Single-view Cycle Synthesis, named PRSCS-Net. Specifically, we first introduce a differentiable backward/forward projection operator into the single-view cycle synthesis network, which reconstructs 3D geometry features from two 2D intra-operative views (one from the input, the other from the synthesis), thereby overcoming the problem of limited views during reconstruction. Subsequently, we employ a self-reconstruction path to extract a latent representation from the pre-operative 3D CT image. Pose estimation is then performed in the 3D geometry feature space, which bridges the dimensional gap, greatly reduces computational complexity, and ensures that the features extracted from pre-operative and intra-operative images are as relevant as possible to pose estimation. Furthermore, to improve handling of large misalignment, we develop a progressive registration path with two sub-registration networks that estimate the pose parameters via two-step warping of volume features. Our method has been evaluated on the public CTSpine1k dataset and an in-house dataset, C-ArmLSpine, for 3D/2D registration. Results demonstrate that PRSCS-Net achieves state-of-the-art registration accuracy, robustness, and generalizability compared with existing methods. Thus, PRSCS-Net has potential for surgical planning and navigation systems for clinical spinal disease.
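The differentiable backward/forward projection pair at the core of this design can be illustrated with a toy parallel-beam operator (a simplified stand-in; the paper's operator models the actual C-arm geometry, and the function names here are illustrative):

```python
import numpy as np

def forward_project(volume, axis=0):
    """Parallel-beam forward projection: line integrals along one axis."""
    return volume.sum(axis=axis)

def back_project(projection, depth, axis=0):
    """Adjoint (backward) projection: smear the 2D image back along the rays."""
    return np.repeat(np.expand_dims(projection, axis=axis), depth, axis=axis)

vol = np.random.rand(8, 16, 16)
proj = forward_project(vol)            # a (16, 16) "X-ray" of the volume
recon = back_project(proj, depth=8)    # a (8, 16, 16) seed for 3D features

# Both operators are linear, so gradients flow through them, and they form
# an adjoint pair: <Ax, y> == <x, A^T y>.
lhs = (forward_project(vol) * proj).sum()
rhs = (vol * back_project(proj, 8)).sum()
```

Because both directions are differentiable, a synthesis network can be trained end-to-end through the projection, which is what lets a second view be hallucinated and reconciled with the real one.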
Affiliation(s)
- Wencong Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Lei Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Hang Gou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Yanggang Gong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Yujia Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
2
Lin J, Tao H, Yuan X, Yang J. ASO Author Reflections: Radical Resection After Neoadjuvant Therapy for Intrahepatic Cholangiocarcinoma-Emerging Technologies in Comprehensive Treatment Strategies. Ann Surg Oncol 2024:10.1245/s10434-024-15896-4. [PMID: 39048906] [DOI: 10.1245/s10434-024-15896-4] [Received: 07/09/2024] [Accepted: 07/10/2024]
Affiliation(s)
- Jinyu Lin
- The Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Haisu Tao
- The Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Xiangdong Yuan
- Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Jian Yang
- The Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
3
Zhou X, Liu Y, Wei C, Xu Q. Reference-free calibration method for asynchronous rotation in robotic CT. Journal of X-Ray Science and Technology 2024:XST240023. [PMID: 38995760] [DOI: 10.3233/xst-240023]
Abstract
BACKGROUND Geometry calibration for a robotic CT system is necessary to obtain acceptable images when the two manipulators rotate asynchronously. OBJECTIVE We aim to evaluate the impact of different types of asynchrony on images and propose a reference-free calibration method based on a simplified geometry model. METHODS We propose a novel calibration method focused on asynchronous rotation in robotic CT. The method is initialized with reconstructions under the default, uncalibrated geometry and uses grid sampling of the estimated geometry to determine the direction of optimization: the difference between the re-projections of sampling points and the original projection guides the search. Images and the estimated geometry are optimized alternately, and iteration stops when the residual projection difference is sufficiently small or the maximum iteration count is reached. RESULTS In our simulation experiments, the proposed method performs well, with PSNR increasing by 2% and SSIM by 13.6% after calibration; reconstructions show fewer artifacts and higher image quality. CONCLUSION Asynchronous rotation has the more significant impact on reconstruction, and the proposed method offers a feasible solution for correcting it.
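The grid-sampled, coarse-to-fine search described in METHODS can be sketched on a 1D toy problem: recover an unknown angular offset by minimizing the re-projection residual over a sampled grid, then refine. All names, the phantom, and the refinement schedule are illustrative, not the authors' implementation:

```python
import numpy as np

def project(points, theta):
    """1D parallel projection of 2D points at gantry angle theta (radians)."""
    return points @ np.array([np.cos(theta), np.sin(theta)])

# Ground truth: the detector lags the nominal angle by an unknown offset.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 2))
true_offset = 0.07
angles = np.linspace(0.0, np.pi, 30)
measured = np.stack([project(pts, a + true_offset) for a in angles])

def residual(offset):
    """Difference between re-projections under an estimated offset and the
    original (measured) projections."""
    reproj = np.stack([project(pts, a + offset) for a in angles])
    return np.abs(reproj - measured).mean()

# Grid sampling of the estimated geometry picks the best candidate; the grid
# is then refined around it, coarse-to-fine, until the residual is small.
est, step = 0.0, 0.05
for _ in range(4):
    grid = est + step * np.arange(-5, 6)
    est = grid[np.argmin([residual(o) for o in grid])]
    step /= 5
```

The real method alternates this geometry update with image reconstruction; the sketch keeps only the geometry half, which is where the grid sampling lives.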
Affiliation(s)
- Xuan Zhou
- Beijing Engineering Research Center of Radiographic Techniques and Equipment, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing, China
- Yuedong Liu
- Beijing Engineering Research Center of Radiographic Techniques and Equipment, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing, China
- Cunfeng Wei
- Beijing Engineering Research Center of Radiographic Techniques and Equipment, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing, China
- Jinan Laboratory of Applied Nuclear Science, Jinan, China
- Qiong Xu
- Beijing Engineering Research Center of Radiographic Techniques and Equipment, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
- Jinan Laboratory of Applied Nuclear Science, Jinan, China
4
Hoffmann M, Hoopes A, Greve DN, Fischl B, Dalca AV. Anatomy-aware and acquisition-agnostic joint registration with SynthMorph. Imaging Neuroscience (Cambridge, Mass.) 2024; 2:1-33. [PMID: 39015335] [PMCID: PMC11247402] [DOI: 10.1162/imag_a_00197] [Received: 01/25/2023] [Revised: 04/27/2024] [Accepted: 05/21/2024]
Abstract
Affine image registration is a cornerstone of medical-image analysis. While classical algorithms can achieve excellent accuracy, they solve a time-consuming optimization for every image pair. Deep-learning (DL) methods learn a function that maps an image pair to an output transform. Evaluating the function is fast, but capturing large transforms can be challenging, and networks tend to struggle when a test-image characteristic, such as resolution, shifts away from the training domain. Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if the algorithm considers all structures in the image. We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image without preprocessing. First, we leverage a strategy that trains networks with widely varying images synthesized from label maps, yielding robust performance across acquisition specifics unseen at training. Second, we optimize the spatial overlap of select anatomical labels. This enables networks to distinguish anatomy of interest from irrelevant structures, removing the need for preprocessing that excludes content which would impinge on anatomy-specific registration. Third, we combine the affine model with a deformable hypernetwork that lets users choose the optimal deformation-field regularity for their specific data, at registration time, in a fraction of the time required by classical methods. This framework is applicable to learning anatomy-aware, acquisition-agnostic registration of any anatomy with any architecture, as long as label maps are available for training. We analyze how competing architectures learn affine transforms and compare state-of-the-art registration tools across an extremely diverse set of neuroimaging data, aiming to truly capture the behavior of methods in the real world.
SynthMorph demonstrates high accuracy and is available at https://w3id.org/synthmorph, as a single complete end-to-end solution for registration of brain magnetic resonance imaging (MRI) data.
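The synthesize-from-label-maps strategy (the first point above) can be illustrated with a minimal sketch. This is not the SynthMorph code, just the underlying idea: draw a fresh random contrast per label for each training sample, so a network trained on such pairs can never overfit to one acquisition's intensities:

```python
import numpy as np

def synthesize_image(label_map, rng):
    """Assign a random intensity to each label, then corrupt the image, so
    every draw looks like a different 'acquisition' of the same anatomy."""
    intensities = rng.uniform(0.0, 1.0, size=label_map.max() + 1)
    image = intensities[label_map]                       # random contrast
    image = image + rng.normal(scale=0.05, size=image.shape)  # noise
    gamma = rng.uniform(0.5, 2.0)                        # intensity nonlinearity
    return np.clip(image, 0.0, 1.0) ** gamma

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=(32, 32))               # toy 2D label map
img_a = synthesize_image(labels, rng)
img_b = synthesize_image(labels, rng)
# img_a and img_b share anatomy but not appearance; a registration network
# trained on such pairs is supervised by label overlap, not image intensity.
```

The second point above corresponds to computing the overlap loss only on the select labels of interest, which is what makes the learned registration anatomy-aware.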
Affiliation(s)
- Malte Hoffmann
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Andrew Hoopes
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Douglas N. Greve
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Adrian V. Dalca
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
5
Huang Y, Zhang X, Hu Y, Johnston AR, Jones CK, Zbijewski WB, Siewerdsen JH, Helm PA, Witham TF, Uneri A. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis. Comput Med Imaging Graph 2024; 114:102365. [PMID: 38471330] [DOI: 10.1016/j.compmedimag.2024.102365] [Received: 10/04/2023] [Revised: 01/31/2024] [Accepted: 02/22/2024]
Abstract
PURPOSE Improved integration and use of preoperative imaging during surgery hold significant potential for enhancing treatment planning and instrument guidance through surgical navigation. Despite its prevalent use in diagnostic settings, MR imaging is rarely used for navigation in spine surgery. This study aims to leverage MR imaging for intraoperative visualization of spine anatomy, particularly in cases where CT imaging is unavailable or when minimizing radiation exposure is essential, such as in pediatric surgery. METHODS This work presents a method for deformable 3D-2D registration of preoperative MR images with a novel intraoperative long-length tomosynthesis imaging modality (viz., Long-Film [LF]). A conditional generative adversarial network is used to translate MR images to an intermediate bone image suitable for registration, followed by a model-based 3D-2D registration algorithm to deformably map the synthesized images to LF images. The algorithm's performance was evaluated on cadaveric specimens with implanted markers and controlled deformation, and in clinical images of patients undergoing spine surgery as part of a large-scale clinical study on LF imaging. RESULTS The proposed method yielded a median 2D projection distance error of 2.0 mm (interquartile range [IQR]: 1.1-3.3 mm) and a 3D target registration error of 1.5 mm (IQR: 0.8-2.1 mm) in cadaver studies. Notably, the multi-scale approach exhibited significantly higher accuracy compared to rigid solutions and effectively managed the challenges posed by piecewise rigid spine deformation. The robustness and consistency of the method were evaluated on clinical images, yielding no outliers on vertebrae without surgical instrumentation and 3% outliers on vertebrae with instrumentation. CONCLUSIONS This work constitutes the first reported approach for deformable MR to LF registration based on deep image synthesis. 
The proposed framework provides access to the preoperative annotations and planning information during surgery and enables surgical navigation within the context of MR images and/or dual-plane LF images.
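The 2D projection distance error reported above can be sketched with a simplified pinhole model: project marker points under the estimated and ground-truth rigid transforms and measure the 2D distance on the detector. The focal length and function names are assumptions for illustration; the study's projection geometry is more detailed:

```python
import numpy as np

def projection_distance_error(points, T_est, T_true, f=1000.0):
    """2D detector-plane distance between markers mapped by the estimated
    vs. ground-truth 4x4 rigid transform, after perspective projection."""
    def apply(T, p):
        return p @ T[:3, :3].T + T[:3, 3]
    def perspective(p):
        return f * p[:, :2] / p[:, 2:3]
    d = perspective(apply(T_est, points)) - perspective(apply(T_true, points))
    return np.linalg.norm(d, axis=1)

pts = np.array([[0.0, 0.0, 500.0], [10.0, -5.0, 520.0]])  # markers (mm)
T_true = np.eye(4)
T_est = np.eye(4)
T_est[:3, 3] = [0.5, 0.0, 0.0]          # 0.5 mm lateral registration error
err = projection_distance_error(pts, T_est, T_true)
```

A small 3D error magnifies on the detector in proportion to the focal length over depth, which is why 2D projection distance and 3D target registration error are reported separately in the paper.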
Affiliation(s)
- Yixuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Ashley R Johnston
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Craig K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Wojciech B Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, United States
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
6
Gao X, Zhong W, Wang R, Heimann AF, Tannast M, Zheng G. MAIRNet: weakly supervised anatomy-aware multimodal articulated image registration network. Int J Comput Assist Radiol Surg 2024; 19:507-517. [PMID: 38236477] [DOI: 10.1007/s11548-023-03056-0] [Received: 08/26/2023] [Accepted: 12/21/2023]
Abstract
PURPOSE Multimodal articulated image registration (MAIR) is a challenging problem because the resulting transformation must maintain rigidity for bony structures while allowing elastic deformation of the surrounding soft tissues. Existing deep learning-based methods ignore the articulated structures and treat it as a pure deformable registration problem, leading to suboptimal results. METHODS We propose a novel weakly supervised anatomy-aware multimodal articulated image registration network, referred to as MAIRNet, to solve this problem. The architecture of MAIRNet comprises two branches: a non-learnable polyrigid registration branch that estimates an initial velocity field, and a learnable deformable registration branch that learns an increment. Together, the two branches produce a velocity field that is integrated to generate the final displacement field. RESULTS We designed and conducted comprehensive experiments on three datasets to evaluate the proposed method. On the hip dataset, our method achieved average Dice scores of 90.8%, 92.4%, and 91.3% for the pelvis, the right femur, and the left femur, respectively. On the lumbar spine dataset, it obtained average Dice scores of 86.1% and 85.9% for the L4 and L5 vertebrae. On the thoracic spine dataset, it achieved average Dice scores of 76.7%, 79.5%, 82.9%, 85.5%, and 85.7% for the five thoracic vertebrae T6 through T10. CONCLUSION We developed a novel approach for multimodal articulated image registration. Comprehensive experiments on three typical yet challenging datasets demonstrated its efficacy, and our method achieved better results than state-of-the-art approaches.
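The core idea of the non-learnable polyrigid branch, blending per-bone rigid motions with spatial weights so the field is rigid near each bone and smooth in between, can be sketched minimally. Gaussian weights on distance to bone centers and pure translations are illustrative simplifications of a true polyrigid (log-Euclidean) fusion:

```python
import numpy as np

def polyrigid_field(points, centers, translations, sigma=5.0):
    """Blend per-bone translations with normalized Gaussian weights on the
    distance to each bone center: rigid near a bone, interpolated between."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, B)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w = w / w.sum(axis=1, keepdims=True)
    return w @ translations                                          # (N, 3)

pts = np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
centers = np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0]])  # two "bones"
trans = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])     # per-bone motion
v = polyrigid_field(pts, centers, trans)
# Near bone 1 the field follows bone 1 almost exactly; at the midpoint it
# is a 50/50 blend of both bones' motions.
```

In MAIRNet this fused field serves as the initial velocity field, and the learnable branch only has to predict a residual on top of it, which is what preserves bone rigidity.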
Affiliation(s)
- Xiaoru Gao
- Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
- Woquan Zhong
- The Third Hospital, Peking University, Beijing, 100191, China
- Runze Wang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
- Alexander F Heimann
- Department of Orthopaedic Surgery, HFR Cantonal Hospital, University of Fribourg, Fribourg, Switzerland
- Moritz Tannast
- Department of Orthopaedic Surgery, HFR Cantonal Hospital, University of Fribourg, Fribourg, Switzerland
- Guoyan Zheng
- Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China