1
Zhang W, Zhao L, Gou H, Gong Y, Zhou Y, Feng Q. PRSCS-Net: Progressive 3D/2D rigid Registration network with the guidance of Single-view Cycle Synthesis. Med Image Anal 2024;97:103283. PMID: 39094463. DOI: 10.1016/j.media.2024.103283.
Abstract
3D/2D registration between pre-operative 3D images (computed tomography, CT) and intra-operative 2D images (X-ray) plays an important role in image-guided spine surgery. Conventional iterative approaches are time-consuming, while existing learning-based approaches incur high computational costs and perform poorly under large misalignments because of projection-induced losses or ill-posed reconstruction. In this paper, we propose a Progressive 3D/2D rigid Registration network with the guidance of Single-view Cycle Synthesis, named PRSCS-Net. Specifically, we first introduce differentiable backward/forward projection operators into the single-view cycle synthesis network, which reconstructs the corresponding 3D geometry features from two 2D intra-operative views (one from the input, the other from the synthesis); in this way, the problem of limited views during reconstruction is solved. Subsequently, we employ a self-reconstruction path to extract a latent representation from the pre-operative 3D CT. Pose estimation is then performed in the 3D geometry feature space, which bridges the dimensional gap, greatly reduces computational complexity, and ensures that the features extracted from pre-operative and intra-operative images are as relevant as possible to pose estimation. Furthermore, to strengthen the model's handling of large misalignments, we develop a progressive registration path with two sub-registration networks that estimate the pose parameters via two-step warping of volume features. Our method has been evaluated for 3D/2D registration on the public CTSpine1k dataset and an in-house dataset, C-ArmLSpine. Results demonstrate that PRSCS-Net achieves state-of-the-art registration accuracy, robustness, and generalizability compared with existing methods. Thus, PRSCS-Net has potential for clinical spinal surgical planning and navigation systems.
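The differentiable backward/forward projection operators at the heart of this abstract can be pictured with the simplest possible stand-in: a parallel-beam pair where forward projection sums a volume along one axis and backprojection (its adjoint) smears a 2D image back along that axis. This is a minimal sketch of the concept only, not the authors' operators or geometry; all function names are illustrative.

```python
import numpy as np

def forward_project(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Parallel-beam ray sums along `axis` (a linear, differentiable op)."""
    return volume.sum(axis=axis)

def back_project(projection: np.ndarray, depth: int, axis: int = 0) -> np.ndarray:
    """Adjoint of forward_project: replicate the projection along `axis`."""
    return np.expand_dims(projection, axis).repeat(depth, axis=axis)

vol = np.arange(8.0).reshape(2, 2, 2)
proj = forward_project(vol, axis=0)          # shape (2, 2)
smear = back_project(proj, depth=2, axis=0)  # shape (2, 2, 2)

# Adjoint check: <P x, y> == <x, P^T y> holds for this linear pair.
y = np.ones_like(proj)
assert np.isclose((proj * y).sum(), (vol * back_project(y, 2, 0)).sum())
```

Because both operators are plain linear maps, gradients flow through them, which is what lets a synthesis network be trained end-to-end through projection.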
Affiliation(s)
- Wencong Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Lei Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Hang Gou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Yanggang Gong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Yujia Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
2
Sun X, Li Y, Li Y, Wang S, Qin Y, Chen P. Reconstruction method suitable for fast CT imaging. Opt Express 2024;32:17072-17087. PMID: 38858899. DOI: 10.1364/oe.522097.
Abstract
Reconstructing computed tomography (CT) images from an extremely limited set of projections is crucial in practical applications. As the number of available projections decreases significantly, both traditional reconstruction and model-based iterative reconstruction methods become severely constrained. This work seeks a reconstruction method applicable to fast CT imaging when the available projections are highly sparse. To minimize the time and cost associated with projection acquisition, we propose a deep learning model, X-CTReNet, which parameterizes a nonlinear mapping from orthogonal projections to CT volumes for 3D reconstruction. Compared with baseline methods, the proposed model can effectively infer CT volumes from two-view projections, highlighting its significant potential for drastically reducing projection acquisition in fast CT imaging.
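To see why two orthogonal views constrain a volume at all, the sketch below back-smears each projection along its own ray axis and averages the two, producing a coarse volume prior. This crude closed-form lifting is only an illustration of the shape logic involved; X-CTReNet replaces it with a learned nonlinear mapping, and all array names here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
p0 = rng.random((N, N))  # projection along axis 0 (e.g. top-down view)
p1 = rng.random((N, N))  # projection along axis 1 (orthogonal side view)

# Smear each 2D projection back along its ray direction, then average:
v0 = np.broadcast_to(p0[None, :, :], (N, N, N))  # constant along axis 0
v1 = np.broadcast_to(p1[:, None, :], (N, N, N))  # constant along axis 1
volume = 0.5 * (v0 + v1)                         # coarse two-view prior
```

Each view fixes the volume only up to an unknown distribution along its own rays, which is exactly the ill-posedness a learned two-view model must resolve.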
3
Zhu M, Fu Q, Liu B, Zhang M, Li B, Luo X, Zhou F. RT-SRTS: Angle-agnostic real-time simultaneous 3D reconstruction and tumor segmentation from single X-ray projection. Comput Biol Med 2024;173:108390. PMID: 38569234. DOI: 10.1016/j.compbiomed.2024.108390.
Abstract
Radiotherapy is one of the primary treatment methods for tumors, but organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising way to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor, and they are validated only for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed that integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention-enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation complete in approximately 70 ms, significantly faster than the time threshold required for real-time tumor tracking. The efficacy of both AEC and URE has also been validated in ablation studies. The code for this work is available at https://github.com/ZywooSimple/RT-SRTS.
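Multi-task learning of the kind described here typically optimizes a single combined objective: a reconstruction term plus a weighted segmentation term. The sketch below is a generic example of such a joint loss (MSE plus Dice loss), assuming nothing about RT-SRTS's actual loss weights or terms; `w_seg` and all names are illustrative.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
    """Dice coefficient between binary masks (1.0 = perfect overlap)."""
    inter = float((pred * gt).sum())
    return (2.0 * inter + eps) / (float(pred.sum() + gt.sum()) + eps)

def mtl_loss(recon, recon_gt, seg, seg_gt, w_seg: float = 1.0) -> float:
    """Reconstruction MSE plus weighted Dice loss for the segmentation head."""
    mse = float(np.mean((recon - recon_gt) ** 2))
    return mse + w_seg * (1.0 - dice(seg, seg_gt))

# Perfect predictions on a toy case drive the joint loss to ~0.
recon = recon_gt = np.zeros((2, 2))
seg = seg_gt = np.array([[1.0, 0.0], [0.0, 0.0]])
loss = mtl_loss(recon, recon_gt, seg, seg_gt)
```

Sharing one backbone under a joint loss like this is what lets reconstruction and segmentation reinforce each other instead of being solved in sequence.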
Affiliation(s)
- Miao Zhu
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Qiming Fu
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Bo Liu
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Mengxi Zhang
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Bojian Li
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Xiaoyan Luo
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Fugen Zhou
- Image Processing Center, Beihang University, Beijing, 100191, PR China
4
Zhang C, Liu L, Dai J, Liu X, He W, Chan Y, Xie Y, Chi F, Liang X. XTransCT: ultra-fast volumetric CT reconstruction using two orthogonal x-ray projections for image-guided radiation therapy via a transformer network. Phys Med Biol 2024;69:085010. PMID: 38471171. DOI: 10.1088/1361-6560/ad3320.
Abstract
Objective. The aim of this study was to reconstruct volumetric computed tomography (CT) images in real time from ultra-sparse two-dimensional x-ray projections, facilitating easier navigation and positioning during image-guided radiation therapy. Approach. Our approach leverages a voxel-space-searching Transformer model to overcome the limitations of conventional CT reconstruction techniques, which require extensive x-ray projections and lead to high radiation doses and equipment constraints. Main results. The proposed XTransCT algorithm demonstrated superior performance in terms of image quality, structural accuracy, and generalizability across different datasets, including a hospital set of 50 patients, the large-scale public LIDC-IDRI dataset, and the LNDb dataset for cross-validation. Notably, the algorithm achieved an approximately 300% improvement in reconstruction speed, at 44 ms per 3D image reconstruction, compared with previous 3D convolution-based methods. Significance. The XTransCT architecture has the potential to impact clinical practice by providing high-quality CT images faster and with substantially reduced radiation exposure for patients. The model's generalizability suggests it could be applicable in various healthcare settings.
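The voxel-space-searching idea can be pictured as target voxels querying projection features through attention. The sketch below is a generic scaled dot-product attention in NumPy, with voxel-position queries attending over projection-pixel keys/values; it is a cartoon of the mechanism under our own assumptions, not the XTransCT architecture, and every dimension and name here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_pix, d = 8, 16, 4
q = rng.standard_normal((n_vox, d))  # one query per target voxel position
k = rng.standard_normal((n_pix, d))  # keys from projection-image pixels
v = rng.standard_normal((n_pix, d))  # values from projection-image pixels

# Scaled dot-product attention: each voxel gathers a weighted mix of
# projection features, with weights summing to 1 over the pixels.
scores = q @ k.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
voxel_features = weights @ v  # shape (n_vox, d)
```

Because each voxel's feature is a single weighted sum over pixels, the per-voxel work is independent and parallel, which is consistent with the millisecond-scale inference the abstract reports.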
Affiliation(s)
- Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, People's Republic of China
- Lin Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, People's Republic of China
- Jingjing Dai
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, People's Republic of China
- Xuan Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, People's Republic of China
- Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, People's Republic of China
- Yinping Chan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, People's Republic of China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, People's Republic of China
- Feng Chi
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, 510060, People's Republic of China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, People's Republic of China
5
Xing X, Ser JD, Wu Y, Li Y, Xia J, Xu L, Firmin D, Gatehouse P, Yang G. HDL: Hybrid Deep Learning for the Synthesis of Myocardial Velocity Maps in Digital Twins for Cardiac Analysis. IEEE J Biomed Health Inform 2023;27:5134-5142. PMID: 35290192. DOI: 10.1109/jbhi.2022.3158897.
Abstract
Synthetic digital twins based on medical data accelerate the acquisition, labelling, and decision-making procedures in digital healthcare. A core part of digital healthcare twins is model-based data synthesis, which permits the generation of realistic medical signals without having to cope with the modelling complexity of the anatomical and biochemical phenomena that produce them in reality. Unfortunately, algorithms for cardiac data synthesis have so far been scarcely studied in the literature. An important imaging modality in cardiac examination is three-directional CINE multi-slice myocardial velocity mapping (3Dir MVM), which provides a quantitative assessment of cardiac motion in three orthogonal directions of the left ventricle. The long acquisition time and complex acquisition procedure make it all the more urgent to produce synthetic digital twins of this imaging modality. In this study, we propose a hybrid deep learning (HDL) network, especially for synthesising 3Dir MVM data. Our algorithm features a hybrid UNet and a Generative Adversarial Network with a foreground-background generation scheme. The experimental results show that from temporally down-sampled (six times) magnitude CINE images, our proposed algorithm can still successfully synthesise high temporal resolution 3Dir MVM CMR data (PSNR = 42.32) with precise left ventricle segmentation (DICE = 0.92). These performance scores indicate that our proposed HDL algorithm can be implemented in real-world digital twins for myocardial velocity mapping data simulation. To the best of our knowledge, this is the first work to investigate digital twins of 3Dir MVM CMR, which has shown great potential for improving the efficiency of clinical studies via synthesised cardiac data.
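For readers unfamiliar with the PSNR figure quoted above (42.32 dB), the standard definition is 10·log10(peak²/MSE). The sketch below implements it for images scaled to [0, 1]; it is a generic reference implementation, not code from the paper.

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = float(np.mean((x - y) ** 2))
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.01 on a [0, 1] image gives MSE = 1e-4, i.e. 40 dB.
x = np.zeros((8, 8))
y = np.full((8, 8), 0.01)
value = psnr(x, y)  # → 40.0 dB
```

On this scale, the reported 42.32 dB corresponds to a per-pixel RMS error well under 1% of the intensity range.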
6
Liu Y, Liu Y, Zhang P, Zhang Q, Wang L, Yan R, Li W, Gui Z. Spark plug defects detection based on improved Faster-RCNN algorithm. J Xray Sci Technol 2022;30:709-724. PMID: 35404300. DOI: 10.3233/xst-211120.
Abstract
The objective of this study is to apply an improved Faster-RCNN model to solve the problems of low detection accuracy and slow detection speed in spark plug defect detection. In detail, an attention-module-based symmetrical convolutional network (ASCN) is designed as the backbone to extract multi-scale features. Then, a multi-scale region proposal network (MRPN), in which InceptionV2 is used to achieve sliding windows of different scales instead of a single sliding window, is proposed and tested. Additionally, a dataset of 1,402 X-ray spark plug images is established and divided into two subsets with a 4:1 ratio for training and testing the improved Faster-RCNN model, respectively. The proposed model is initialized by transfer learning from a model pre-trained on the MS COCO dataset. In the test experiments, the proposed method achieves an average accuracy of 89% and a recall of 97%. Compared with other Faster-RCNN models, YOLOv3, SSD, and RetinaNet, our method improves the average accuracy by more than 6% and the recall by more than 2%. Furthermore, it can detect at 20 fps with a 1024×1024×3 input image and can therefore be used for real-time automatic detection of spark plug defects.
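The 4:1 train/test split of the 1,402-image dataset can be sketched in a few lines of standard Python. The abstract specifies only the ratio, so the shuffling, seed, and file names below are all our own illustrative assumptions, not the authors' actual partitioning.

```python
import random

# Hypothetical file names; only the 1,402 count and 4:1 ratio come from
# the abstract.
images = [f"spark_plug_{i:04d}.png" for i in range(1402)]
random.seed(0)            # fixed seed so the split is reproducible
random.shuffle(images)

n_train = len(images) * 4 // 5          # 4/5 of 1402 -> 1121 training images
train, test = images[:n_train], images[n_train:]  # 1121 train / 281 test
```

Shuffling before splitting avoids any ordering bias (e.g. images grouped by production batch) leaking into the train/test partition.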
Affiliation(s)
- Yuhang Liu
- School of Computer Science and Technology, North University of China, Taiyuan, China
- Yi Liu
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Pengcheng Zhang
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Quan Zhang
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Lei Wang
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Rongbiao Yan
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Wenqiang Li
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Zhiguo Gui
- School of Information and Communication Engineering, North University of China, Taiyuan, China