1
Zhou D, Yu C, Liu W, Liu F. Registration of multimodal bone images based on edge similarity metaheuristic. Comput Biol Med 2024; 174:108379. PMID: 38631115. DOI: 10.1016/j.compbiomed.2024.108379.
Abstract
OBJECTIVE Blurry medical images affect the accuracy and efficiency of multimodal image registration, and existing methods require further improvement. METHODS We propose an edge-based similarity registration method for multimodal medical images, especially bone images, optimised by a balance optimiser. First, we use GPU (graphics processing unit) rendering simulation to convert computed tomography data into digitally reconstructed radiographs. Second, we introduce the improved cascaded edge network (ICENet), a convolutional neural network that extracts edge information from blurry medical images. The bilateral Gaussian-weighted similarity of pairs of X-ray images and digitally reconstructed radiographs is then measured, and the balance optimiser is applied iteratively to estimate the best pose for image registration. RESULTS Experimental results show that, on average, the proposed method with ICENet outperforms other edge detection networks by 20%, 12%, 18.83%, and 11.93% in overall Dice similarity, overall intersection over union, peak signal-to-noise ratio, and structural similarity index, respectively, with a registration success rate of up to 90% and an average reduction of 220% in registration time. CONCLUSION The proposed method with ICENet achieves a high registration success rate even for blurry medical images, and its efficiency and robustness exceed those of existing methods. SIGNIFICANCE Our proposal may be suitable for supporting medical diagnosis, radiation therapy, image-guided surgery, and other clinical applications.
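The overall Dice similarity and intersection-over-union figures quoted above are standard overlap metrics on binary masks. A minimal generic sketch (not the authors' code; function names are illustrative):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union between two binary masks: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

Both metrics lie in [0, 1], with 1 meaning perfect overlap of the registered structures.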
Affiliation(s)
- Dibin Zhou: School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Chen Yu: School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Wenhao Liu: School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Fuchang Liu: School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
2
Zhang J, Qing C, Li Y, Wang Y. BCSwinReg: A cross-modal attention network for CBCT-to-CT multimodal image registration. Comput Biol Med 2024; 171:107990. PMID: 38377717. DOI: 10.1016/j.compbiomed.2024.107990.
Abstract
Computed tomography (CT) and cone beam computed tomography (CBCT) registration plays an important role in radiotherapy. However, the poor quality of CBCT makes CBCT-CT multimodal registration challenging. Effective feature fusion and mapping often lead to better results in multimodal registration. We therefore propose a new backbone network, BCSwinReg, and a cross-modal attention module, CrossSwin. Specifically, CrossSwin is designed to promote multimodal feature fusion and map the modality-specific domains to a common domain, thereby helping the network learn the correspondence between images. BCSwinReg discovers correspondence by exchanging information through cross-attention, obtains multi-level semantic information through a multi-resolution strategy, and finally integrates the multi-resolution deformations via a divide-and-conquer cascade. We performed experiments on the publicly available 4D-Lung dataset to demonstrate the effectiveness of CrossSwin and BCSwinReg. Compared with VoxelMorph, BCSwinReg improves the Dice Similarity Coefficient (DSC) by 3.3% and the average 95% Hausdorff distance (HD95) by 0.19.
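The HD95 metric reported above is the 95th-percentile symmetric Hausdorff distance between two contour or surface point sets. A generic sketch of the standard definition (not the paper's implementation):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point
    sets (e.g. organ surface voxel coordinates), in the units of the
    input coordinates."""
    d = cdist(np.asarray(pts_a, float), np.asarray(pts_b, float))
    a_to_b = np.percentile(d.min(axis=1), 95)  # nearest-neighbour dists A->B
    b_to_a = np.percentile(d.min(axis=0), 95)  # nearest-neighbour dists B->A
    return float(max(a_to_b, b_to_a))
```

Using the 95th percentile instead of the maximum makes the metric robust to a few outlier points on the segmented surfaces.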
Affiliation(s)
- Jieming Zhang: East China University of Science and Technology, Shanghai, 200237, China
- Chang Qing: East China University of Science and Technology, Shanghai, 200237, China
- Yu Li: East China University of Science and Technology, Shanghai, 200237, China
- Yaqi Wang: East China University of Science and Technology, Shanghai, 200237, China
3
Liu L, Fan X, Liu H, Zhang C, Kong W, Dai J, Jiang Y, Xie Y, Liang X. QUIZ: An arbitrary volumetric point matching method for medical image registration. Comput Med Imaging Graph 2024; 112:102336. PMID: 38244280. DOI: 10.1016/j.compmedimag.2024.102336.
Abstract
Rigid pre-registration is crucial in scenarios involving local-global matching or other large deformations. Current popular methods rely on unsupervised learning based on grayscale similarity, but when different poses lead to varying tissue structures, or when image quality is poor, these methods tend to be unstable and inaccurate. In this study, we propose a novel method for medical image registration based on matching arbitrary voxel points of interest, called the query point quizzer (QUIZ). QUIZ focuses on the correspondence between local-global matching points, employing a CNN for feature extraction and a Transformer architecture for global point-matching queries, followed by applying the average displacement as a local rigid transformation of the image. We validated this approach on a large-deformation dataset of cervical cancer patients, with results showing substantially smaller deviations than state-of-the-art methods. Remarkably, even for cross-modality subjects, it surpasses the current state of the art.
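The final "average displacement" step, applying the mean displacement of matched point pairs as a translation, can be illustrated in a few lines; this is a simplified sketch of that idea, not the QUIZ code:

```python
import numpy as np

def mean_displacement_translation(fixed_pts, moving_pts):
    """Estimate a translation-only rigid transform as the mean
    displacement between matched point pairs (fixed - moving)."""
    fixed_pts = np.asarray(fixed_pts, float)
    moving_pts = np.asarray(moving_pts, float)
    return (fixed_pts - moving_pts).mean(axis=0)
```

Adding the returned vector to every moving point brings the point clouds into alignment in the translation-only case; full rigid alignment would also estimate a rotation (e.g. via a Procrustes/Kabsch fit).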
Affiliation(s)
- Lin Liu: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Xinxin Fan: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Haoyang Liu: Guangdong Medical University, Dongguan, 523808, China
- Chulong Zhang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Weibin Kong: Guangdong Medical University, Dongguan, 523808, China
- Jingjing Dai: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yuming Jiang: Department of Radiation Oncology, Wake Forest University School of Medicine, Winston Salem, 27587, USA
- Yaoqin Xie: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xiaokun Liang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
4
Zheng T, Oda H, Hayashi Y, Nakamura S, Mori M, Takabatake H, Natori H, Oda M, Mori K. SGSR: style-subnets-assisted generative latent bank for large-factor super-resolution with registered medical image dataset. Int J Comput Assist Radiol Surg 2024; 19:493-506. PMID: 38129364. DOI: 10.1007/s11548-023-03037-3.
Abstract
PURPOSE We propose a large-factor super-resolution (SR) method for registered medical image datasets. Conventional SR approaches use low-resolution (LR) and high-resolution (HR) image pairs to train a deep convolutional neural network (DCN). However, LR and HR medical images are commonly acquired from different imaging devices, so obtaining LR-HR pairs requires registration, which inevitably introduces registration errors. Training an SR DCN on such pairs causes collapsed SR results. To address these challenges, we introduce a novel SR approach designed specifically for registered LR-HR medical images. METHODS We propose the style-subnets-assisted generative latent bank for large-factor super-resolution (SGSR), trained on registered medical image datasets. Pre-trained generative models, termed a generative latent bank (GLB), store rich image priors and can be applied in SR to generate realistic and faithful images. We improve the GLB by introducing a style-subnets-assisted GLB (S-GLB), propose a novel inter-uncertainty loss to boost performance, and further improve results by inputting adjacent slices to provide additional spatial information. RESULTS SGSR outperforms state-of-the-art (SOTA) supervised SR methods qualitatively and quantitatively on multiple datasets, increasing peak signal-to-noise ratio from 32.628 to 34.206 dB over recent supervised baselines. CONCLUSION SGSR performs large-factor SR when trained on a registered LR-HR medical image dataset containing registration errors, and its outputs show both realistic textures and accurate anatomical structures. Experiments on multiple datasets demonstrated SGSR's superiority over other SOTA methods. SR medical images generated by SGSR are expected to improve the accuracy of pre-surgery diagnosis and reduce patient burden.
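The peak signal-to-noise ratio gain quoted above (32.628 to 34.206 dB) uses the standard PSNR definition, sketched here generically (this is the textbook formula, not the paper's evaluation code):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction, for intensities spanning `data_range`."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Each extra dB corresponds to roughly a 21% reduction in mean squared error, so a 1.6 dB gain is a substantial improvement in reconstruction fidelity.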
Affiliation(s)
- Tong Zheng: Graduate School of Informatics, Nagoya University, Nagoya, Japan
- Hirohisa Oda: School of Management and Information, University of Shizuoka, Shizuoka, Japan
- Yuichiro Hayashi: Graduate School of Informatics, Nagoya University, Nagoya, Japan
- Shota Nakamura: Nagoya University Graduate School of Medicine, Nagoya, Japan
- Masaki Mori: Sapporo-Kosei General Hospital, Sapporo, Japan
- Masahiro Oda: Graduate School of Informatics, Nagoya University, Nagoya, Japan; Information Strategy Office, Information and Communications, Nagoya University, Nagoya, Japan
- Kensaku Mori: Graduate School of Informatics, Nagoya University, Nagoya, Japan; Information Technology Center, Nagoya University, Nagoya, Japan; Research Center for Medical Bigdata, National Institute of Informatics, Tokyo, Japan
5
Chen P, Lin J, Guo Y, Pei X. Unsupervised Imbalanced Registration for Enhancing Accuracy and Stability in Medical Image Registration. Curr Med Imaging 2024; 20:CMIR-EPUB-137213. PMID: 38258590. DOI: 10.2174/0115734056265001231122110350.
Abstract
BACKGROUND Medical image registration plays an important role in several applications. Existing unsupervised-learning approaches encounter a data imbalance problem, as their target is usually a continuous variable. OBJECTIVE In this study, we introduce a novel approach, Unsupervised Imbalanced Registration, to address data imbalance and prevent overconfidence while increasing the accuracy and stability of 4D image registration. METHODS Our approach performs unsupervised image mixtures to smooth the input space, followed by unsupervised image registration to learn the continuous target. We evaluated our method on 4D-Lung with two widely used unsupervised methods, VoxelMorph and ViT-V-Net. RESULTS Our findings demonstrate that the proposed method significantly improves the mean registration accuracy by 3%-10% on a small dataset while reducing the accuracy variance by 10%. CONCLUSION Unsupervised Imbalanced Registration is a promising approach that is compatible with current unsupervised image registration methods applied to 4D images.
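An "image mixture" that smooths the input space is commonly implemented as a mixup-style convex blend of two images with a Beta-sampled weight. A minimal sketch under that assumption (the paper's exact mixing scheme may differ):

```python
import numpy as np

def mix_images(x1, x2, alpha=0.4, rng=None):
    """Blend two same-shape images with a Beta(alpha, alpha)-sampled
    weight, a mixup-style smoothing of the input space."""
    rng = np.random.default_rng() if rng is None else rng
    lam = float(rng.beta(alpha, alpha))  # mixing coefficient in [0, 1]
    return lam * x1 + (1.0 - lam) * x2, lam
```

The same coefficient `lam` would then be used to interpolate the corresponding registration targets, keeping inputs and targets consistent.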
Affiliation(s)
- Peizhi Chen: College of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China; Key Laboratory of Pattern Recognition and Image Understanding, Fujian, Xiamen 361024, China
- Jiacheng Lin: College of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
- Yifan Guo: College of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
- Xuan Pei: College of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
6
Zhou Z, Wang S, Hu J, Liu A, Qian X, Geng C, Zheng J, Chen G, Ji J, Dai Y. Unsupervised registration for liver CT-MR images based on the multiscale integrated spatial-weight module and dual similarity guidance. Comput Med Imaging Graph 2023; 108:102260. PMID: 37343325. DOI: 10.1016/j.compmedimag.2023.102260.
Abstract
PURPOSE Multimodal registration is a key task in medical image analysis. Because multimodal images differ greatly in intensity scale and texture pattern, designing distinctive similarity metrics to guide deep learning-based multimodal image registration is a great challenge. In addition, owing to their limited receptive fields, existing deep learning-based methods are mainly suited to small deformations and struggle with large ones. To address these issues, we present an unsupervised multimodal image registration method based on a multiscale integrated spatial-weight module and dual similarity guidance. METHODS A U-shaped network with our multiscale integrated spatial-weight module is embedded into a multi-resolution registration architecture to achieve end-to-end large-deformation registration. The spatial-weight module effectively highlights regions with large deformation and aggregates discriminative features, while the multi-resolution architecture helps solve the network's optimization problem in a coarse-to-fine manner. Furthermore, we introduce a loss function based on dual similarity, representing both global gray-scale similarity and local feature similarity, to optimize the unsupervised multimodal registration network. RESULTS We verified the effectiveness of the proposed method on liver CT-MR images. Experimental results indicate that it achieves the best DSC and TRE values of 92.70 ± 1.75% and 6.52 ± 2.94 mm compared with other state-of-the-art registration algorithms. CONCLUSION The proposed method accurately estimates large deformation fields by aggregating multiscale features, achieving higher registration accuracy at fast registration speed. Comparative experiments also demonstrate the effectiveness and generalization ability of the algorithm.
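A common choice for the "global gray-scale similarity" term in such dual-similarity losses is normalized cross-correlation. A generic sketch of that metric (the paper's actual loss terms are not specified here, so treat this as one plausible instantiation):

```python
import numpy as np

def ncc(a, b, eps=1e-10):
    """Global normalized cross-correlation between two images,
    in [-1, 1]; 1 means a perfect linear intensity relationship."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b)) + eps
    return float(np.sum(a * b) / denom)
```

In a registration loss one would minimise `1 - ncc(fixed, warped)` (or its locally windowed variant for multimodal robustness).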
Affiliation(s)
- Zhiyong Zhou: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Shuaikun Wang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Jisu Hu: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Anqi Liu: University of California, Davis, United States
- Xusheng Qian: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Chen Geng: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jian Zheng: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Guangqiang Chen: The Second Affiliated Hospital of Suzhou University, Suzhou 215000, China
- Jiansong Ji: Key Laboratory of Imaging Diagnosis and Minimally Invasive Intervention Research, The Fifth Affiliated Hospital of Wenzhou Medical University, Lishui 323000, China
- Yakang Dai: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
7
Rivas-Villar D, Hervella ÁS, Rouco J, Novo J. Joint keypoint detection and description network for color fundus image registration. Quant Imaging Med Surg 2023; 13:4540-4562. PMID: 37456305. PMCID: PMC10347320. DOI: 10.21037/qims-23-4.
Abstract
Background Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. Image registration, the process of aligning images taken from different viewpoints or moments in time, is fundamental to comparing such images and assessing changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over them to justify the added computational cost. However, deep learning registration methods remain attractive because they can be easily adapted to different modalities and devices through a data-driven learning approach. Methods In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to produce repeatable keypoints and reliable descriptors, which together yield an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method on the Messidor dataset and test it on the Fundus Image Registration Dataset (FIRE), both publicly accessible. Results Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method's parameters to assess their impact on registration performance. The method obtained an overall Registration Score of 0.695 on the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A). Conclusions Our proposal improves on previous deep learning methods in every category and surpasses classical approaches in category A, which features disease progression and thus represents the most relevant scenario for clinical practice, since registration is commonly used for disease monitoring.
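The RANSAC step fits a transform to keypoint matches while rejecting outliers. As a minimal illustration of the consensus idea (the paper fits a richer transform from learned descriptors; here a translation-only model is used so one match suffices per hypothesis):

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, tol=2.0, rng=None):
    """Minimal RANSAC sketch: estimate a 2D translation from noisy
    keypoint matches while rejecting outliers."""
    rng = np.random.default_rng(0) if rng is None else rng
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    best_t, best_count = None, -1
    for _ in range(n_iters):
        i = rng.integers(len(src))                    # sample one match
        t = dst[i] - src[i]                           # hypothesise translation
        err = np.linalg.norm(dst - (src + t), axis=1)
        count = int((err < tol).sum())                # count inliers
        if count > best_count:
            best_count, best_t = count, t
    # refit on the consensus set of the best hypothesis
    mask = np.linalg.norm(dst - (src + best_t), axis=1) < tol
    return (dst[mask] - src[mask]).mean(axis=0), mask
```

With a homography or affine model the structure is identical; only the per-hypothesis sample size and the fit step change.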
Affiliation(s)
- David Rivas-Villar: VARPA Group, A Coruña Biomedical Research Institute (INIBIC), University of A Coruña, Xubias de Arriba, A Coruña, Spain; CITIC Research Centre, University of A Coruña, Campus de Elviña, A Coruña, Spain
- Álvaro S. Hervella: VARPA Group, A Coruña Biomedical Research Institute (INIBIC), University of A Coruña, Xubias de Arriba, A Coruña, Spain; CITIC Research Centre, University of A Coruña, Campus de Elviña, A Coruña, Spain
- José Rouco: VARPA Group, A Coruña Biomedical Research Institute (INIBIC), University of A Coruña, Xubias de Arriba, A Coruña, Spain; CITIC Research Centre, University of A Coruña, Campus de Elviña, A Coruña, Spain
- Jorge Novo: VARPA Group, A Coruña Biomedical Research Institute (INIBIC), University of A Coruña, Xubias de Arriba, A Coruña, Spain; CITIC Research Centre, University of A Coruña, Campus de Elviña, A Coruña, Spain
8
Chang Q, Lu C, Li M. Cascading Affine and B-spline Registration Method for Large Deformation Registration of Lung X-rays. J Digit Imaging 2023; 36:1262-1278. PMID: 36788195. PMCID: PMC10287888. DOI: 10.1007/s10278-022-00763-z.
Abstract
Accurate registration of lung X-rays is an important task in medical image analysis. However, conventional methods are usually computationally expensive, and existing deep learning methods struggle with the large deformations caused by respiratory and cardiac motion. In this paper, we use deep learning to handle large deformations while matching the accuracy of conventional methods. We propose the cascading affine and B-spline network (CABN), which consists of a convolutional cross-stitch affine block (CCAB) and a B-spline U-net-like block (BUB) for large lung motion. CCAB uses the convolutional cross-stitch model to learn global features among images, and BUB adopts cubic B-splines, which are well suited to large deformations. We evaluated CCAB, BUB, and CABN separately on two chest X-ray datasets. The experimental results indicate that our methods are highly competitive in both accuracy and runtime compared with other deep learning methods and iterative conventional approaches. Moreover, CCAB can also serve as a preprocessing step for non-rigid registration methods, replacing the affine stage in conventional pipelines.
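The cascade pattern, a global affine stage whose output is refined by a dense deformable stage, can be sketched with SciPy warping primitives. This is a schematic of the two-stage composition only (the network that predicts the parameters is not shown, and the function name is illustrative):

```python
import numpy as np
from scipy import ndimage

def cascade_warp(moving, affine_matrix, offset, disp):
    """Apply an affine warp, then a dense displacement field (such as
    one derived from a B-spline stage), mirroring a coarse-to-fine
    registration cascade on a 2D image."""
    affined = ndimage.affine_transform(moving, affine_matrix,
                                       offset=offset, order=1)
    yy, xx = np.meshgrid(np.arange(moving.shape[0]),
                         np.arange(moving.shape[1]), indexing='ij')
    # sample the affine-warped image at positions shifted by disp
    coords = np.stack([yy + disp[..., 0], xx + disp[..., 1]])
    return ndimage.map_coordinates(affined, coords, order=1)
```

With an identity affine and a zero displacement field the cascade reduces to the identity warp, which is a useful sanity check.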
Affiliation(s)
- Qing Chang: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Chenhao Lu: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Mengke Li: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
9
Gui P, He F, Ling BWK, Zhang D, Ge Z. Normal vibration distribution search-based differential evolution algorithm for multimodal biomedical image registration. Neural Comput Appl 2023; 35:1-23. PMID: 37362574. PMCID: PMC10227826. DOI: 10.1007/s00521-023-08649-z.
Abstract
In linear registration, a floating image is spatially aligned with a reference image through a series of linear transformations; linear registration is also commonly used as a preprocessing step for nonrigid registration. To better find the optimal transformation in pairwise intensity-based medical image registration, we present an optimization algorithm called the normal vibration distribution search-based differential evolution algorithm (NVSA), modified from the Bernstein search-based differential evolution (BSD) algorithm. We redesign the search pattern of the BSD algorithm and introduce several control parameters for fine-tuning, reducing the difficulty of the algorithm. In this study, 23 classic optimization functions and 16 real-world patients (yielding 41 multimodal registration scenarios) are used in experiments to statistically investigate the problem-solving ability of the NVSA, with nine metaheuristic algorithms as baselines. Compared with commonly used registration tools such as ANTs, Elastix, and FSL, our method achieves better registration performance on the RIRE dataset. Moreover, our method performs well with or without an initial spatial transformation across different evaluation indicators, demonstrating its versatility and robustness for various clinical needs and applications. This study establishes that metaheuristic-based methods can accomplish linear registration better than the frequently used approaches, and the proposed method shows promise for solving real-world clinical problems as a preprocessing step for nonrigid registration. The source code of the NVSA is publicly available at https://github.com/PengGui-N/NVSA.
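The core idea, treating intensity-based linear registration as a black-box optimization problem for a population-based metaheuristic, can be demonstrated with SciPy's stock differential evolution standing in for the NVSA (a translation-only toy problem; the paper optimises a full linear transform):

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.ndimage import shift

def register_translation_de(fixed, moving, max_shift=10.0, seed=0):
    """Translation-only intensity-based registration: minimise the mean
    squared intensity difference over the shift parameters using
    differential evolution."""
    def cost(t):
        warped = shift(moving, t, order=1, mode='nearest')
        return float(np.mean((fixed - warped) ** 2))
    bounds = [(-max_shift, max_shift)] * fixed.ndim
    result = differential_evolution(cost, bounds, seed=seed)
    return result.x  # estimated (row, col) shift to apply to `moving`
```

Because the optimiser only ever calls `cost`, swapping in a different similarity metric (e.g. mutual information for multimodal pairs) or a redesigned search pattern like the NVSA's leaves the structure unchanged.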
Affiliation(s)
- Peng Gui: School of Computer Science, Wuhan University, Wuhan, 430072, People’s Republic of China; AIM Lab, Faculty of IT, Monash University, Melbourne, VIC 3800, Australia; Monash-Airdoc Research, Monash University, Melbourne, VIC 3800, Australia
- Fazhi He: School of Computer Science, Wuhan University, Wuhan, 430072, People’s Republic of China
- Bingo Wing-Kuen Ling: School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, People’s Republic of China
- Dengyi Zhang: School of Computer Science, Wuhan University, Wuhan, 430072, People’s Republic of China
- Zongyuan Ge: AIM Lab, Faculty of IT, Monash University, Melbourne, VIC 3800, Australia; Monash-Airdoc Research, Monash University, Melbourne, VIC 3800, Australia
10
Wu J, Fan Y. HNAS-Reg: Hierarchical Neural Architecture Search for Deformable Medical Image Registration. Proc IEEE Int Symp Biomed Imaging 2023. PMID: 37790881. PMCID: PMC10544790. DOI: 10.1109/isbi53787.2023.10230534.
Abstract
Convolutional neural networks (CNNs) have been widely used to build deep learning models for medical image registration, but manually designed network architectures are not necessarily optimal. This paper presents a hierarchical neural architecture search framework (HNAS-Reg), consisting of both convolutional operation search and network topology search, to identify the optimal network architecture for deformable medical image registration. To mitigate computational overhead and memory constraints, a partial channel strategy is utilized without losing optimization quality. Experiments on three datasets comprising 636 T1-weighted magnetic resonance images (MRIs) demonstrate that the proposed method builds deep learning models with improved image registration accuracy and reduced model size compared with state-of-the-art image registration approaches, including one representative traditional approach and two unsupervised learning-based approaches.
Affiliation(s)
- Jiong Wu: Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Yong Fan: Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
11
Guan S, Li T, Meng C, Ma L. Multi-mode information fusion navigation system for robot-assisted vascular interventional surgery. BMC Surg 2023; 23:51. PMID: 36894932. PMCID: PMC9996930. DOI: 10.1186/s12893-023-01944-5.
Abstract
BACKGROUND Minimally invasive vascular intervention (MIVI) is a powerful technique for treating cardiovascular diseases such as abdominal aortic aneurysm (AAA), thoracic aortic aneurysm (TAA), and aortic dissection (AD). Navigation in traditional MIVI surgery relies mainly on 2D digital subtraction angiography (DSA) images, which makes it difficult to observe the 3D morphology of blood vessels and to position interventional instruments. The multi-mode information fusion navigation system (MIFNS) proposed in this paper combines preoperative CT images with intraoperative DSA images to increase the visual information available during operations. RESULTS The main functions of MIFNS were evaluated using real clinical data and a vascular model. The registration accuracy between preoperative CTA images and intraoperative DSA images was less than 1 mm. The positioning accuracy of surgical instruments, quantitatively assessed using a vascular model, was also less than 1 mm. Real clinical data were used to assess the navigation results of MIFNS on AAA, TAA, and AD. CONCLUSIONS A comprehensive and effective navigation system was developed to assist surgeons during MIVI. Both the registration accuracy and the positioning accuracy of the proposed system were less than 1 mm, meeting the accuracy requirements of robot-assisted MIVI.
Affiliation(s)
- Shaoya Guan: School of Engineers, Beijing Institute of Petrochemical Technology, Beijing, China
- Tianqi Li: School of Information Engineering, Beijing Institute of Petrochemical Technology, Beijing, China
- Cai Meng: School of Astronautics, Beihang University, Beijing, China
- Limei Ma: School of Engineers, Beijing Institute of Petrochemical Technology, Beijing, China
12
Berg A, Vandersmissen E, Wimmer M, Major D, Neubauer T, Lenis D, Cant J, Snoeckx A, Bühler K. Employing similarity to highlight differences: On the impact of anatomical assumptions in chest X-ray registration methods. Comput Biol Med 2023; 154:106543. PMID: 36682179. DOI: 10.1016/j.compbiomed.2023.106543.
Abstract
To facilitate both the detection and the interpretation of findings in chest X-rays, comparison with a previous image of the same patient is very valuable to radiologists. Today, the most common approach for deep learning methods to automatically inspect chest X-rays disregards the patient history and classifies only single images as normal or abnormal. Nevertheless, several methods for assisting in the task of comparison through image registration have been proposed in the past. However, as we illustrate, they tend to miss specific types of pathological changes such as cardiomegaly and effusion. Because of their assumptions about fixed anatomical structures or their measures of registration quality, they produce unnaturally deformed warp fields that impair visualization of differences between moving and fixed images. We aim to overcome these limitations through a new paradigm based on individual rib-pair segmentation for anatomy-penalized registration. Our method proves to be a natural way to limit the folding percentage of the warp field to 1/6 of the state of the art while increasing the overlap of ribs by more than 25%, yielding difference images that show pathological changes overlooked by other methods. We develop an anatomically penalized convolutional multi-stage solution on the National Institutes of Health (NIH) data set, starting from fewer than 25 fully and 50 partly labeled training images, employing sequential instance memory segmentation with hole dropout, weak labeling, coarse-to-fine refinement and Gaussian mixture model histogram matching. We statistically evaluate the benefits of our method and highlight the limits of currently used metrics for registration of chest X-rays.
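The folding percentage discussed above is conventionally computed from the Jacobian determinant of the warp field: the mapping folds wherever the determinant is non-positive. A minimal NumPy sketch (not the authors' implementation; the function name is ours):

```python
import numpy as np

def folding_percentage(disp):
    """Percentage of folded pixels in a 2-D warp field.

    disp: array of shape (2, H, W), displacement in pixels (dy, dx).
    The mapping x -> x + disp(x) folds where the determinant of its
    Jacobian (identity plus displacement gradients) is non-positive.
    """
    dy_dy, dy_dx = np.gradient(disp[0])   # derivatives of the y-displacement
    dx_dy, dx_dx = np.gradient(disp[1])   # derivatives of the x-displacement
    det = (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy
    return 100.0 * float(np.mean(det <= 0))

# A pure translation never folds.
print(folding_percentage(np.zeros((2, 16, 16))))   # 0.0
```

A field that reverses orientation along one axis (e.g. displacement −2y in the row direction) folds everywhere, giving 100%.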
Affiliation(s)
- Astrid Berg
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Eva Vandersmissen
- Agfa NV, Radiology Solutions R&D, Septestraat 27, 2640 Mortsel, Belgium.
- Maria Wimmer
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- David Major
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Theresa Neubauer
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Dimitrios Lenis
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Jeroen Cant
- Agfa NV, Radiology Solutions R&D, Septestraat 27, 2640 Mortsel, Belgium.
- Annemiek Snoeckx
- Department of Radiology, Antwerp University Hospital, Drie Eikenstraat 655, 2650 Edegem, Belgium; Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk, Belgium.
- Katja Bühler
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
13
Gao F, Chen B, Zhou T, Luo H. Research on the effect of visceral artery Aneurysm's cardiac morphological variation on hemodynamic situation based on time-resolved CT-scan and computational fluid dynamics. Comput Methods Programs Biomed 2022; 221:106928. [PMID: 35701249 DOI: 10.1016/j.cmpb.2022.106928] [Received: 01/11/2022] [Revised: 05/30/2022] [Accepted: 05/31/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Muscular arteries and related aneurysms keep deforming during the cardiac cycle. However, current patient-specific computational fluid dynamics (CFD) analyses of aneurysms are usually based on images from a single cardiac phase. The cardiac deformation and displacement characteristics of muscular arteries and aneurysms, as well as their impact on CFD results, have not been adequately explored. The present study aimed to illustrate the cardiac morphological variation of visceral muscular arteries (VMAs) and aneurysms (VAAs) and to evaluate its influence on the hemodynamic situation at lesion locations. METHODS Four-dimensional computed tomography angiogram (4D-CTA) images of six patients with VAAs were acquired. Medical image registration was used to capture cardiac variations of the VMAs. Steady-state CFD simulations were performed on twelve different time-phase geometries. Deformation, displacement, wall shear stress (WSS), velocity, and pressure values at pathological locations were compared to illustrate the deforming characteristics of VAAs and their influence on CFD simulation results. RESULTS The deformation and displacement characteristics of lesion locations for the six patients show a pulsatile pattern. Maximum displacements are always less than 4 mm. The ratio fluctuations of endovascular cavity volume and vascular inner wall surface area, which were employed to depict cardiac deformation, are always less than 20%. According to CFD simulations based on deformed VMAs, WSS has a larger coefficient of variation (COV) than velocity and pressure. Except for one patient's WSS, the COVs of the different hemodynamic parameters obtained from the simulation results are always less than 10%. CONCLUSIONS Based on 4D-CTA images, we confirmed that cardiovascular circulation has a periodic impact on the morphologic characteristics of VMAs. A wave that extends throughout the studied region is observed and has a dominant influence on the displacement of VMAs.
According to the CFD results, the influence of the VMAs' deformation and displacement differs across hemodynamic parameters: the variation in WSS is more prominent than that in pressure and velocity. On most occasions, the influence of the VMAs' periodic deformation and displacement on the simulation results is insignificant. However, the variation in simulation results induced by deforming VMAs cannot simply be ignored.
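The coefficient of variation used above to compare hemodynamic parameters across cardiac phases is simply the standard deviation normalised by the mean, reported as a percentage. A small illustrative helper (not the authors' code):

```python
import numpy as np

def coefficient_of_variation(samples):
    """COV = population standard deviation / mean, as a percentage."""
    x = np.asarray(samples, dtype=float)
    return 100.0 * float(x.std() / x.mean())

# Identical WSS values across all cardiac phases vary by 0%.
print(coefficient_of_variation([3.0, 3.0, 3.0]))   # 0.0
```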
Affiliation(s)
- Fan Gao
- Department of Simulation Science and Technology, Boea Wisdom (Hangzhou) Network Technology Co., Ltd, Hangzhou 310000, China.
- Bing Chen
- Division of Vascular Surgery, Department of Surgery, Second Affiliated Hospital of Medical College, Zhejiang University, Hangzhou 310052, China.
- Tao Zhou
- Department of Simulation Science and Technology, Boea Wisdom (Hangzhou) Network Technology Co., Ltd, Hangzhou 310000, China.
- Huan Luo
- Department of Simulation Science and Technology, Boea Wisdom (Hangzhou) Network Technology Co., Ltd, Hangzhou 310000, China.
14
Rivas-Villar D, Hervella ÁS, Rouco J, Novo J. Color fundus image registration using a learning-based domain-specific landmark detection methodology. Comput Biol Med 2022; 140:105101. [PMID: 34875412 DOI: 10.1016/j.compbiomed.2021.105101] [Received: 07/30/2021] [Revised: 11/29/2021] [Accepted: 11/29/2021] [Indexed: 11/17/2022]
Abstract
Medical imaging, and particularly retinal imaging, allows many eye pathologies, as well as some systemic diseases such as hypertension or diabetes, to be diagnosed accurately. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or among a population. Currently, this field is dominated by complex classical methods, because novel deep learning methods cannot yet compete in terms of results and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images based on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the need to compute complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and outperforms the deep learning methods in the state of the art.
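The RANSAC matching step referred to above works by repeatedly fitting a transform to a minimal sample of keypoint correspondences and keeping the hypothesis that explains the most matches. A simplified 2-D affine version (an illustration under our own naming, not the paper's code):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine mapping src -> dst; points are (N, 2)."""
    X = np.hstack([src, np.ones((len(src), 1))])        # (N, 3) homogeneous
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2) parameters
    return params

def apply_affine(params, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

def ransac_affine(src, dst, n_iter=200, tol=2.0, seed=0):
    """Keep the affine hypothesis from a minimal 3-point sample that
    explains the most correspondences to within tol pixels."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        cand = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(cand, src) - dst, axis=1)
        n_in = int((err < tol).sum())
        if n_in > best_inliers:
            best, best_inliers = cand, n_in
    return best, best_inliers
```

Because bifurcation/crossover locations are distinctive on their own, the consensus over the transform replaces descriptor matching.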
Affiliation(s)
- David Rivas-Villar
- Centro de investigacion CITIC, Universidade da Coruña, 15 071, A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15 006, A Coruña, Spain.
- Álvaro S Hervella
- Centro de investigacion CITIC, Universidade da Coruña, 15 071, A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15 006, A Coruña, Spain.
- José Rouco
- Centro de investigacion CITIC, Universidade da Coruña, 15 071, A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15 006, A Coruña, Spain.
- Jorge Novo
- Centro de investigacion CITIC, Universidade da Coruña, 15 071, A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15 006, A Coruña, Spain.
15
Gu D, Liu G, Cao X, Xue Z, Shen D. A consistent deep registration network with group data modeling. Comput Med Imaging Graph 2021; 90:101904. [PMID: 33964791 DOI: 10.1016/j.compmedimag.2021.101904] [Received: 11/14/2020] [Revised: 03/12/2021] [Accepted: 03/14/2021] [Indexed: 11/15/2022]
Abstract
Medical image registration is a critical process for automated image computing, and ideally, the deformation field from one image to another should be smooth and inverse-consistent in order to bidirectionally align anatomical structures and to preserve their topology. Consistent registration can reduce bias caused by the order of input images, increase robustness, and improve reliability of subsequent quantitative analysis. Rigorous differential geometry constraints have been used in traditional methods to enforce the topological consistency but require comprehensive optimization and are time consuming. Recent studies show that deep learning-based registration methods can achieve comparable accuracy and are much faster than traditional registration. However, the estimated deformation fields do not necessarily possess inverse consistency when the order of two input images is swapped. To tackle this problem, we propose a new deep registration algorithm by employing the inverse consistency training strategy, so the forward and backward deformations of a pair of images can consistently align anatomical structures. In addition, since fine-tuned deformations among the training images reflect variability of shapes and appearances in a high-dimensional space, we formulate a group prior data modeling framework so that such statistics can be used to improve accuracy and consistency for registering new input image pairs. Specifically, we implement the wavelet principal component analysis (w-PCA) model of deformation fields and incorporate such prior constraints into the inverse-consistent deep registration network. We refer to the proposed algorithm as consistent deep registration with group data modeling. Experiments on 3D brain magnetic resonance (MR) images showed that the unsupervised consistent deep registration and data modeling strategy yields consistent deformations after the input images are switched and tolerates image variations well.
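Inverse consistency means that composing the forward and backward deformations should return every point to where it started. In 1-D, with linear resampling, the residual being penalised can be measured as below (a toy illustration under our own naming, not the paper's network code):

```python
import numpy as np

def compose_error(u, v, x):
    """Mean inverse-consistency residual of 1-D displacement fields.

    u maps x -> x + u(x) (forward); v is the backward displacement
    sampled on the same grid x.  If the pair is inverse-consistent,
    following the forward warp and then the backward warp returns to
    the start, i.e. u(x) + v(x + u(x)) ≈ 0 everywhere.
    """
    v_at_warped = np.interp(x + u, x, v)   # linearly resample v at the warped positions
    return float(np.mean(np.abs(u + v_at_warped)))
```

A constant forward shift of +0.1 paired with a backward shift of −0.1 gives zero residual; pairing it with a zero backward field leaves the full 0.1 inconsistency.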
Affiliation(s)
- Dongdong Gu
- Hunan University, Changsha, Hunan, China; Shanghai United Imaging Intelligence Co. Ltd, Shanghai, China
- Guocai Liu
- Hunan University, Changsha, Hunan, China
- Xiaohuan Cao
- Shanghai United Imaging Intelligence Co. Ltd, Shanghai, China
- Zhong Xue
- Shanghai United Imaging Intelligence Co. Ltd, Shanghai, China.
- Dinggang Shen
- Shanghai United Imaging Intelligence Co. Ltd, Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.
16
Kaisarly D, Meierhofer D, El Gezawi M, Rösch P, Kunzelmann KH. Effects of flowable liners on the shrinkage vectors of bulk-fill composites. Clin Oral Investig 2021; 25:4927-4940. [PMID: 33506426 PMCID: PMC8342399 DOI: 10.1007/s00784-021-03801-2] [Received: 07/11/2020] [Accepted: 01/19/2021] [Indexed: 01/08/2023]
Abstract
Objectives This investigation evaluated the effect of flowable liners beneath a composite restoration applied via different methods on the pattern of shrinkage vectors. Methods Forty molars were divided into five groups (n = 8), and cylindrical cavities were prepared and bonded with a self-etch adhesive (AdheSe). Tetric EvoCeram Bulk Fill (TBF) was used as the filling material in all cavities. The flowable liners Tetric EvoFlow Bulk Fill (TEF) and SDR were used to line the cavity floor. In gp1-TBF, the flowable composite was not used. TEF was applied in a thin layer in gp2-fl/TEF + TBF and gp3-fl/TEF + TBFincremental. Two flowable composites with a layer thickness of 2 mm were compared in gp4-fl/TEF + TBF and gp5-fl/SDR + TBF. TEF and SDR were mixed with radiolucent glass beads, while air bubbles inherently present in TBF served as markers. Each material application was scanned twice by micro-computed tomography before and after light curing. Scans were subjected to image segmentation for calculation of the shrinkage vectors. Results The absence of a flowable liner resulted in the greatest shrinkage vectors. A thin flowable liner (gp2-fl/TEF + TBFbulk) resulted in larger overall shrinkage vectors for the whole restoration than a thick flowable liner (gp4-fl/TEF + TBF). A thin flowable liner and incremental application (gp3-fl/TEF + TBFincremental) yielded the smallest shrinkage vectors. SDR yielded slightly smaller shrinkage vectors for the whole restoration than that observed in gp4-fl/TEF + TBF. Conclusions Thick flowable liner layers had a more pronounced stress-relieving effect than thin layers regardless of the flowable liner type. Clinical relevance It is recommended to apply a flowable liner (thin or thick) beneath bulk-fill composites, preferably incrementally.
Affiliation(s)
- Dalia Kaisarly
- Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians-University, Goethestrasse 70, 80336, Munich, Germany; Biomaterials Department, Faculty of Oral and Dental Medicine, Cairo University, Cairo, Egypt.
- D Meierhofer
- Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians-University, Goethestrasse 70, 80336, Munich, Germany
- M El Gezawi
- Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- P Rösch
- University of Applied Sciences, Augsburg, Germany
- K H Kunzelmann
- Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians-University, Goethestrasse 70, 80336, Munich, Germany
17
Zachariadis O, Teatini A, Satpute N, Gómez-Luna J, Mutlu O, Elle OJ, Olivares J. Accelerating B-spline interpolation on GPUs: Application to medical image registration. Comput Methods Programs Biomed 2020; 193:105431. [PMID: 32283385 DOI: 10.1016/j.cmpb.2020.105431] [Received: 11/07/2019] [Revised: 02/14/2020] [Accepted: 03/02/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE B-spline interpolation (BSI) is a popular technique in the context of medical imaging due to its adaptability and robustness in 3D object modeling. A field that utilizes BSI is Image Guided Surgery (IGS). IGS provides navigation using medical images, which can be segmented and reconstructed into 3D models, often through BSI. Image registration tasks also use BSI to transform medical imaging data collected before the surgery and intra-operative data collected during the surgery into a common coordinate space. However, such IGS tasks are computationally demanding, especially when applied to 3D medical images, due to the complexity and amount of data involved. Therefore, optimization of IGS algorithms is greatly desirable, for example, to perform image registration tasks intra-operatively and to enable real-time applications. A traditional CPU does not have sufficient computing power to achieve these goals and, thus, it is preferable to rely on GPUs. In this paper, we introduce a novel GPU implementation of BSI to accelerate the calculation of the deformation field in non-rigid image registration algorithms. METHODS Our BSI implementation on GPUs minimizes the data that needs to be moved between memory and processing cores during loading of the input grid, and leverages the large on-chip GPU register file for reuse of input values. Moreover, we re-formulate our method as trilinear interpolations to reduce computational complexity and increase accuracy. To provide pre-clinical validation of our method and demonstrate its benefits in medical applications, we integrate our improved BSI into a registration workflow for compensation of liver deformation (caused by pneumoperitoneum, i.e., inflation of the abdomen) and evaluate its performance. RESULTS Our approach improves the performance of BSI by an average of 6.5× and interpolation accuracy by 2× compared to three state-of-the-art GPU implementations. 
Through pre-clinical validation, we demonstrate that our optimized interpolation accelerates a non-rigid image registration algorithm, which is based on the Free Form Deformation (FFD) method, by up to 34%. CONCLUSION Our study shows that we can achieve significant performance and accuracy gains with our novel parallelization scheme that makes effective use of the GPU resources. We show that our method improves the performance of real medical imaging registration applications used in practice today.
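For background, uniform cubic B-spline interpolation blends four neighbouring control coefficients with cubic basis weights; in 3-D this becomes a 4×4×4 weighted sum, which the paper re-expresses through trilinear interpolations to exploit GPU hardware. A 1-D NumPy sketch of the underlying arithmetic (our own simplified formulation, not the GPU code):

```python
import numpy as np

def cubic_bspline_weights(t):
    """Uniform cubic B-spline basis weights for a fractional offset t in [0, 1)."""
    return np.array([
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    ])

def bspline_interp_1d(coeffs, x):
    """Evaluate a 1-D cubic B-spline at x (valid for 1 <= x < len(coeffs) - 2).

    coeffs[i] is the control coefficient at integer knot i; the value at x
    is the weighted sum of the four surrounding coefficients.
    """
    i = int(np.floor(x))
    w = cubic_bspline_weights(x - i)
    return float(np.dot(w, coeffs[i - 1:i + 3]))
```

The four weights always sum to one, and with coefficients placed at their knot indices the spline reproduces linear functions exactly, which is a handy sanity check.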
Affiliation(s)
- Orestis Zachariadis
- Department of Electronics and Computer Engineering, Universidad de Cordoba, Córdoba, Spain.
- Andrea Teatini
- The Intervention Centre, Oslo University Hospital - Rikshospitalet, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway.
- Nitin Satpute
- Department of Electronics and Computer Engineering, Universidad de Cordoba, Córdoba, Spain
- Juan Gómez-Luna
- Department of Computer Science, ETH Zurich, Zurich, Switzerland
- Onur Mutlu
- Department of Computer Science, ETH Zurich, Zurich, Switzerland
- Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital - Rikshospitalet, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway
- Joaquín Olivares
- Department of Electronics and Computer Engineering, Universidad de Cordoba, Córdoba, Spain
18
Wen T, Liu H, Lin L, Wang B, Hou J, Huang C, Pan T, Du Y. Multiswarm Artificial Bee Colony algorithm based on spark cloud computing platform for medical image registration. Comput Methods Programs Biomed 2020; 192:105432. [PMID: 32278250 DOI: 10.1016/j.cmpb.2020.105432] [Received: 08/21/2019] [Revised: 02/25/2020] [Accepted: 03/02/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND Over the years, medical image registration has been widely used in various fields. However, different application characteristics, such as scale, computational complexity, and optimization goals, can cause problems. Therefore, developing an optimization algorithm based on clustering calculation is crucial. METHOD To solve the aforementioned problem, a multiswarm artificial bee colony (MS-ABC) multi-objective optimization algorithm based on clustering calculation is proposed. This algorithm can accelerate the resolution of complex problems on the Spark platform. Experiments show that the algorithm can optimize certain conventional complex problems and perform medical image registration tests. RESULT Results show that the MS-ABC algorithm demonstrates excellent performance in medical image registration tests. The optimization results of the MS-ABC algorithm for conventional problems are similar to those of existing algorithms; however, its performance is more time efficient for complex problems, especially when additional goals are needed. CONCLUSION The MS-ABC algorithm is applied to the Spark platform to accelerate the resolution of complex application problems. It can solve the problem of traditional algorithms regarding long calculation time, especially in the case of highly complex and large amounts of data, which can substantially improve data-processing efficiency.
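As background on the underlying metaheuristic, a minimal single-swarm artificial bee colony loop looks like the sketch below. This is our illustrative serial version with assumed names; the paper's MS-ABC adds multiple swarms, multi-objective handling and Spark-based parallelism on top of this basic scheme:

```python
import numpy as np

def abc_minimize(f, bounds, n_food=10, n_iter=100, limit=20, seed=0):
    """Minimal single-swarm Artificial Bee Colony minimiser (a sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    foods = rng.uniform(lo, hi, size=(n_food, dim))   # candidate solutions
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)              # stagnation counters

    def try_neighbour(i):
        k = rng.integers(n_food)
        while k == i:
            k = rng.integers(n_food)
        j = rng.integers(dim)
        cand = foods[i].copy()                        # perturb one coordinate
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fit[i]:                               # greedy replacement
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                       # employed bees
            try_neighbour(i)
        probs = fit.max() - fit + 1e-12               # onlookers favour good sources
        probs /= probs.sum()
        for i in rng.choice(n_food, size=n_food, p=probs):
            try_neighbour(i)
        for i in np.where(trials > limit)[0]:         # scouts abandon stale sources
            foods[i] = rng.uniform(lo, hi)
            fit[i] = f(foods[i])
            trials[i] = 0
    best = int(fit.argmin())
    return foods[best], float(fit[best])
```

In a Spark setting, each swarm's employed/onlooker phase can run in a separate partition, with periodic exchange of the best food sources.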
Affiliation(s)
- Tingxi Wen
- College of Engineering, Huaqiao University, Quanzhou, 362021, China; Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou 362021, China; Fujian Key Laboratory of Autonomous Controllable Software, Quanzhou 362000, China; Postdoctoral Workstation of Linewell Software Company Limited, Quanzhou 362000, China.
- Haotian Liu
- College of Engineering, Huaqiao University, Quanzhou, 362021, China
- Luxin Lin
- College of Engineering, Huaqiao University, Quanzhou, 362021, China
- Bin Wang
- College of Engineering, Huaqiao University, Quanzhou, 362021, China
- Jigong Hou
- Fujian Key Laboratory of Autonomous Controllable Software, Quanzhou 362000, China; Postdoctoral Workstation of Linewell Software Company Limited, Quanzhou 362000, China.
- Chuanbo Huang
- College of Engineering, Huaqiao University, Quanzhou, 362021, China.
- Ting Pan
- College of Engineering, Huaqiao University, Quanzhou, 362021, China
- Yu Du
- College of Engineering, Huaqiao University, Quanzhou, 362021, China
19
Kaisarly D, El Gezawi M, Keßler A, Rösch P, Kunzelmann KH. Shrinkage vectors in flowable bulk-fill and conventional composites: bulk versus incremental application. Clin Oral Investig 2021; 25:1127-1139. [PMID: 32653992 DOI: 10.1007/s00784-020-03412-3] [Received: 01/16/2020] [Accepted: 06/11/2020] [Indexed: 10/25/2022]
Abstract
OBJECTIVES Sufficient depth of cure allows bulk-fill composites to be placed with a 4-mm thickness. This study investigated bulk versus incremental application methods by visualizing shrinkage vectors in flowable bulk-fill and conventional composites. MATERIALS AND METHODS Cylindrical cavities (diameter = 6 mm, depth = 4 mm) were prepared in 24 teeth and then etched and bonded with OptiBond FL (Kerr, Italy). The composites were mixed with 2 wt% radiolucent glass beads. In one group, smart dentin replacement (SDR, Dentsply) was applied in bulk "SDR-bulk" (n = 8). In two groups, SDR and Tetric EvoFlow (Ivoclar Vivadent) were applied in two 2-mm-thick increments: "SDR-incremental" and "EvoFlow-incremental." Each material application was scanned with a micro-CT before and after light-curing (40 s, 1100 mW/cm2), and the shrinkage vectors were computed via image segmentation. Thereafter, linear polymerization shrinkage, shrinkage stress and gelation time were measured (n = 10). RESULTS The greatest shrinkage vectors were found in "SDR-bulk" and "SDR-increment2," and the smallest were found in "SDR-increment1-covered" and "EvoFlow-increment1-covered." Shrinkage away from and toward the cavity floor was greatest in "SDR-bulk" and "EvoFlow-increment2," respectively. The mean values of the shrinkage vectors were significantly different between groups (one-way ANOVA, Tamhane's T2 test, p < 0.05). The linear polymerization shrinkage and shrinkage stress were greatest in Tetric EvoFlow, and the gelation time was greatest in "SDR-bulk." CONCLUSIONS The bulk application method had greater values of shrinkage vectors and a higher debonding tendency at the cavity floor. CLINICAL RELEVANCE Incremental application remains the gold standard of composite insertion.
20
Nowak S, Sprinkart AM. Synchronization and Alignment of Follow-up Examinations: a Practical and Educational Approach Using the DICOM Reference Coordinate System. J Digit Imaging 2020; 32:68-74. [PMID: 30109521 DOI: 10.1007/s10278-018-0117-4] [Indexed: 11/25/2022]
Abstract
This work presents an approach for the synchronization and alignment of Digital Imaging and Communications in Medicine (DICOM) series from different studies that allows, e.g., easier reading of follow-up examinations. The proposed concept, developed within DICOM's patient-based reference coordinate system, makes it possible to synchronize all image data of two different studies/examinations based on a single registration. The most suitable DICOM series for registration could be set as a default per protocol. The necessary basics regarding the DICOM standard and the mathematical transformations used are presented in an educative way to allow straightforward implementation in Picture Archiving and Communication Systems (PACS) and other DICOM tools. The proposed method for the alignment of DICOM images is potentially also useful for various scientific tasks and machine-learning applications.
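The patient-based reference coordinate system used here is defined per series by the ImagePositionPatient, ImageOrientationPatient and PixelSpacing attributes (DICOM PS3.3 C.7.6.2). The standard voxel-to-patient mapping can be assembled as a 4×4 matrix (a sketch with our own function name):

```python
import numpy as np

def dicom_affine(ipp, iop, pixel_spacing, slice_spacing):
    """4x4 matrix mapping voxel indices (col, row, slice, 1) to DICOM
    patient coordinates in mm.

    ipp: ImagePositionPatient (3 values); iop: ImageOrientationPatient
    (6 values: row direction cosines, then column direction cosines);
    pixel_spacing: (row spacing, column spacing) as in PixelSpacing.
    """
    row_cos = np.asarray(iop[:3], dtype=float)   # along increasing column index
    col_cos = np.asarray(iop[3:], dtype=float)   # along increasing row index
    M = np.eye(4)
    M[:3, 0] = row_cos * pixel_spacing[1]        # one column step
    M[:3, 1] = col_cos * pixel_spacing[0]        # one row step
    M[:3, 2] = np.cross(row_cos, col_cos) * slice_spacing  # one slice step
    M[:3, 3] = np.asarray(ipp, dtype=float)
    return M

# Axial slice at the origin, 0.5 mm pixels, 2 mm slice spacing:
M = dicom_affine([0, 0, 0], [1, 0, 0, 0, 1, 0], (0.5, 0.5), 2.0)
# maps voxel (col=10, row=20, slice=3) to patient position (5, 10, 6) mm
```

Synchronizing two studies then reduces to composing one series' affine with the inverse of the other's, plus the single inter-study registration the article describes.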
Affiliation(s)
- Sebastian Nowak
- Department of Mathematics and Technology, University of Applied Sciences Koblenz, Joseph-Rovan-Allee 2, 53424, Remagen, Germany
- Alois M Sprinkart
- Department of Radiology, University of Bonn, Sigmund-Freud-Str. 25, 53127, Bonn, Germany.
21
Mathur P, Samei G, Tsang K, Lobo J, Salcudean S. On the feasibility of transperineal 3D ultrasound image guidance for robotic radical prostatectomy. Int J Comput Assist Radiol Surg 2019; 14:923-931. [PMID: 30863982 DOI: 10.1007/s11548-019-01938-w] [Received: 01/31/2019] [Accepted: 03/05/2019] [Indexed: 11/29/2022]
Abstract
PURPOSE Prostate cancer is the most prevalent form of male-specific cancers. Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical robot has become the gold-standard treatment for organ-confined prostate cancer. To improve intraoperative visualization of anatomical structures, many groups have developed techniques integrating transrectal ultrasound (TRUS) into the surgical workflow. TRUS, however, is intrusive and does not provide real-time volumetric imaging. METHODS We propose a proof-of-concept system offering an alternative noninvasive transperineal view of the prostate and surrounding structures using 3D ultrasound (US), allowing for full-volume imaging in any anatomical plane desired. The system aims to automatically track da Vinci surgical instruments and display a real-time US image registered to preoperative MRI. We evaluate the approach using a custom prostate phantom, an iU22 (Philips Healthcare, Bothell, WA) US machine with an xMATRIX X6-1 transducer, and a custom probe fixture. A novel registration method between the da Vinci kinematic frame and 3D US is presented. To evaluate the entire registration pipeline, we use a previously developed MRI to US deformable registration algorithm. RESULTS Our US calibration technique yielded a registration error of 0.84 mm, compared to 1.76 mm with existing methods. We evaluated overall system error with a prostate phantom, achieving a target registration error of 2.55 mm. CONCLUSION Transperineal imaging using 3D US is a promising approach for image guidance during RALRP. Preliminary results suggest this system is comparable to existing guidance systems using TRUS. With further development and testing, we believe our system has the potential to improve patient outcomes by imaging anatomical structures and prostate cancer in real time.
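Errors such as the 0.84 mm calibration error and the 2.55 mm target registration error reported above are typically obtained by rigidly aligning matched fiducials and measuring residual distances at held-out targets. A compact sketch using the standard Kabsch alignment (our own helper names, not the authors' pipeline):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) with dst ≈ R @ src + t.

    src, dst: matched fiducial coordinates, shape (N, 3).
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # force a proper rotation
    t = cd - R @ cs
    return R, t

def target_registration_error(R, t, targets_src, targets_dst):
    """Mean distance at held-out target points after applying the transform."""
    mapped = targets_src @ R.T + t
    return float(np.mean(np.linalg.norm(mapped - targets_dst, axis=1)))
```

Keeping the evaluation targets separate from the fiducials used for fitting is what distinguishes target registration error from the (optimistic) fiducial residual.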
Affiliation(s)
- Prateek Mathur
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada.
- Golnoosh Samei
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada
- Keith Tsang
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada
- Julio Lobo
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada
- Septimiu Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC, V6T 1Z4, Canada
22
Hu Y, Modat M, Gibson E, Li W, Ghavami N, Bonmati E, Wang G, Bandula S, Moore CM, Emberton M, Ourselin S, Noble JA, Barratt DC, Vercauteren T. Weakly-supervised convolutional neural networks for multimodal image registration. Med Image Anal 2018; 49:1-13. [PMID: 30007253 PMCID: PMC6742510 DOI: 10.1016/j.media.2018.07.002] [Received: 03/21/2018] [Revised: 06/20/2018] [Accepted: 07/03/2018] [Indexed: 11/28/2022]
Abstract
One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground-truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformation from higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields to align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed strategy, for training, of utilising diverse types of anatomical labels, which need not be identifiable over all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real-time and is fully-automated without requiring any anatomical labels or initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved from cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
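The label-driven training signal here relies on an overlap measure such as the Dice score quoted above; a differentiable "soft" version over probabilistic label maps is commonly written as follows (a sketch, not the authors' network code):

```python
import numpy as np

def soft_dice(a, b, eps=1e-6):
    """Soft Dice overlap of two label maps with values in [0, 1].

    For binary masks this reduces to 2|A∩B| / (|A| + |B|); the eps
    term keeps the ratio defined when both maps are empty.
    """
    inter = float(np.sum(a * b))
    return (2.0 * inter + eps) / (float(np.sum(a) + np.sum(b)) + eps)
```

During training, one label map would be the warped moving-image label and the other the fixed-image label, with the negative soft Dice serving as the alignment loss.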
Affiliation(s)
- Yipeng Hu
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK.
- Marc Modat
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Eli Gibson
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Wenqi Li
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Nooshin Ghavami
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Ester Bonmati
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Guotai Wang
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Steven Bandula
- Centre for Medical Imaging, University College London, London, UK
- Caroline M Moore
- Division of Surgery and Interventional Science, University College London, London, UK
- Mark Emberton
- Division of Surgery and Interventional Science, University College London, London, UK
- Sébastien Ourselin
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Dean C Barratt
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Tom Vercauteren
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
23
Kaisarly D, El Gezawi M, Lai G, Jin J, Rösch P, Kunzelmann KH. Effects of occlusal cavity configuration on 3D shrinkage vectors in a flowable composite. Clin Oral Investig 2018; 22:2047-56. [PMID: 29248963 DOI: 10.1007/s00784-017-2304-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Received: 10/09/2016] [Accepted: 12/07/2017] [Indexed: 10/18/2022]
Abstract
OBJECTIVE The objective of this study was to investigate the effects of cavity configuration on the shrinkage vectors of a flowable resin-based composite (RBC) placed in occlusal cavities. MATERIALS AND METHODS Twenty-seven human molars were divided into three groups (n = 9) according to cavity configuration: "adhesive," "diverging," and "cylindrical." The "adhesive" cavity represented beveled enamel margins and occlusally converging walls, the "diverging" cavity had occlusally diverging walls, and the "cylindrical" cavity had parallel walls (diameter = 6 mm); all cavities were 3 mm deep. Each prepared cavity was treated with a self-etch adhesive (Adper Easy Bond, 3M ESPE) and filled with a flowable RBC (Tetric EvoFlow, Ivoclar Vivadent) to which 2 wt% traceable glass beads had been added. Two micro-CT scans were performed on each sample (uncured and cured). The scans were then subjected to medical image registration for shrinkage vector calculation. Shrinkage vectors were evaluated three-dimensionally (3D) and in the axial direction. RESULTS The "adhesive" group had the greatest mean 3D shrinkage vector lengths and upward movement (31.1 ± 10.9 μm; - 13.7 ± 12.1 μm), followed by the "diverging" (27.4 ± 12.1 μm; - 5.7 ± 17.2 μm) and "cylindrical" groups (23.3 ± 11.1 μm; - 3.7 ± 13.6 μm); all groups differed significantly (p < 0.001 for each comparison, one-way ANOVA, Tamhane's T2). CONCLUSION The values and direction of the shrinkage vectors as well as interfacial debonding varied according to the cavity configuration. CLINICAL RELEVANCE Cavity configuration in terms of wall orientation and beveling of the enamel margin influences the shrinkage pattern of composites.
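The shrinkage-vector readout reported in this abstract (mean 3D vector length plus the signed axial component) can be sketched as follows. This is a minimal numpy illustration under the assumption that bead centroids have already been matched between the uncured and cured micro-CT scans by the registration step; `shrinkage_stats` and its arguments are hypothetical names, not the authors' code.

```python
import numpy as np

def shrinkage_stats(beads_uncured, beads_cured):
    """Per-bead 3D shrinkage vectors (in the units of the input
    coordinates, e.g. micrometres) plus the two per-group summary
    statistics reported in the abstract: mean 3D vector length and
    mean axial (z) displacement, where a negative z denotes upward
    movement."""
    vectors = beads_cured - beads_uncured      # (n, 3) displacement per bead
    lengths = np.linalg.norm(vectors, axis=1)  # 3D shrinkage vector length
    axial = vectors[:, 2]                      # signed axial component
    return vectors, lengths.mean(), axial.mean()
```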
24
Kaisarly D, El Gezawi M, Xu X, Rösch P, Kunzelmann KH. Shrinkage vectors of a flowable composite in artificial cavity models with different boundary conditions: Ceramic and Teflon. J Mech Behav Biomed Mater 2017; 77:414-421. [PMID: 29020664 DOI: 10.1016/j.jmbbm.2017.10.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Received: 06/30/2017] [Revised: 09/28/2017] [Accepted: 10/02/2017] [Indexed: 10/18/2022]
Abstract
Polymerization shrinkage of dental resin composites leads to stress build-up at the tooth-restoration interface that predisposes the restoration to debonding. In contrast to the heterogeneity of enamel and dentin, this study investigated the effect of boundary conditions in artificial cavity models, namely ceramic and Teflon. Ceramic serves as a homogeneous substrate that provides optimal bonding conditions, presented here in the form of an etched and silanized ceramic cavity in addition to an etched, silanized, and bonded ceramic cavity. In contrast, the Teflon cavity presented a non-adhesive boundary condition, an exaggerated case of poor bonding such as contamination during the application procedure or a poor bonding substrate such as sclerotic or deep dentin. The greatest 3D shrinkage vectors and movement in the axial direction were observed in the ceramic cavity with the bonding agent, followed by the silanized ceramic cavity; the smallest shrinkage vectors and axial movements were observed in the Teflon cavity. The shrinkage vectors in the ceramic cavities exhibited downward movement toward the cavity bottom, with great downward shrinkage of the free surface. The shrinkage vectors in the Teflon cavity pointed toward the center of the restoration, with lateral movement greater on one side denoting the site of first detachment from the cavity walls. These results showed that the boundary conditions, in terms of bonding substrates, significantly influenced the shrinkage direction.
Affiliation(s)
- Dalia Kaisarly
- Department of Operative Dentistry and Periodontology, University Hospital, LMU Munich, Goethestrasse 70, 80336 Munich, Germany; Biomaterials Department, Faculty of Oral and Dental Medicine, Cairo University, Cairo, Egypt.
- Moataz El Gezawi
- Department of Restorative Dentistry, University of Dammam, Dammam, Saudi Arabia
- Xiaohui Xu
- Department of Operative Dentistry and Periodontology, University Hospital, LMU Munich, Goethestrasse 70, 80336 Munich, Germany
- Peter Rösch
- Faculty of Computer Science, University of Applied Sciences, Augsburg, Germany
- Karl-Heinz Kunzelmann
- Department of Operative Dentistry and Periodontology, University Hospital, LMU Munich, Goethestrasse 70, 80336 Munich, Germany
25
Viergever MA, Maintz JBA, Klein S, Murphy K, Staring M, Pluim JPW. A survey of medical image registration - under review. Med Image Anal 2016; 33:140-144. [PMID: 27427472 DOI: 10.1016/j.media.2016.06.030] [Citation(s) in RCA: 113] [Impact Index Per Article: 14.1] [Received: 04/04/2016] [Revised: 06/17/2016] [Accepted: 06/17/2016] [Indexed: 01/28/2023]
Abstract
A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications would be needed to do justice to advances in the field. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were already called urgent 20 years ago are even more urgent nowadays: validation of registration methods, and translation of the results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but is still in need of further development in various aspects.
Affiliation(s)
- Max A Viergever
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands.
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus MC, Rotterdam, The Netherlands.
- Keelin Murphy
- INFANT Research Centre, University College Cork, Cork, Ireland.
- Marius Staring
- Division of Image Processing, Leiden University Medical Center, Leiden, The Netherlands.
- Josien P W Pluim
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands.
26
Alam F, Rahman SU, Khusro S, Ullah S, Khalil A. Evaluation of Medical Image Registration Techniques Based on Nature and Domain of the Transformation. J Med Imaging Radiat Sci 2016; 47:178-193. [PMID: 31047182 DOI: 10.1016/j.jmir.2015.12.081] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Received: 08/04/2015] [Revised: 12/14/2015] [Accepted: 12/15/2015] [Indexed: 11/29/2022]
Abstract
A great deal of research has been conducted over the past 20 years in the area of medical image registration, which obtains detailed, important, and complementary information from two or more images and aligns them into a single, more informative image. The nature of the transformation and the domain of the transformation are two important categories of medical image registration techniques that deal with the characteristics of object motion in images. This article presents a detailed survey of the registration techniques that belong to both categories, with detailed elaboration on their features, issues, and challenges. Investigating similarity and dissimilarity measures and evaluating performance are the main objectives of this work. This article also provides reference knowledge in a compact form for researchers and clinicians looking for the proper registration technique for a particular application.
Affiliation(s)
- Fakhre Alam
- Department of Computer Science & IT, University of Malakand, Khyber Pakhtunkhwa, Pakistan.
- Sami Ur Rahman
- Department of Computer Science & IT, University of Malakand, Khyber Pakhtunkhwa, Pakistan
- Shah Khusro
- Department of Computer Science, University of Peshawar, Peshawar, Pakistan
- Sehat Ullah
- Department of Computer Science & IT, University of Malakand, Khyber Pakhtunkhwa, Pakistan
- Adnan Khalil
- Department of Computer Science & IT, University of Malakand, Khyber Pakhtunkhwa, Pakistan
27
Demirović D, Šerifović-Trbalić A, Prljača N, Cattin PC. Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration. Comput Med Imaging Graph 2014; 40:94-9. [PMID: 25541494 DOI: 10.1016/j.compmedimag.2014.11.011] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Received: 06/10/2013] [Revised: 11/14/2014] [Accepted: 11/20/2014] [Indexed: 11/30/2022]
Abstract
The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well-known method uses the L2 norm for regularization. Although the L2 norm is known for producing well-behaved, smooth deformation fields, it cannot properly handle the discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and smooth parts of the motion field. In this paper we propose replacing the Gaussian filter of accelerated Demons with a bilateral filter. In contrast to the Gaussian filter, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way the motion field can be smoothed depending on image content, as opposed to classical Gaussian filtering. By proper adjustment of two tunable parameters, more realistic deformations can be obtained in cases of discontinuity. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well-known POPI dataset. Despite the increased computational complexity, the improved registration result is justified, in particular for abdominal data sets where discontinuities often appear due to sliding organ motion.
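The proposed replacement — regularizing the displacement field with an intensity-guided bilateral filter rather than a Gaussian — can be sketched as follows. This is a minimal 2D numpy illustration, not the authors' implementation; `sigma_s` and `sigma_i` stand in for the two tunable parameters the abstract refers to, and the spatial radius is an added assumption.

```python
import numpy as np

def bilateral_regularize(disp, image, sigma_s=2.0, sigma_i=0.1, radius=3):
    """Smooth a 2D displacement field (h, w, 2) with a bilateral filter
    whose range term is driven by image intensities, so smoothing is
    suppressed across intensity edges (candidate sliding boundaries)."""
    h, w = image.shape
    out = np.zeros_like(disp)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial kernel
    img_p = np.pad(image, radius, mode='edge')
    disp_p = np.pad(disp, ((radius, radius), (radius, radius), (0, 0)),
                    mode='edge')
    win = 2 * radius + 1
    for y in range(h):
        for x in range(w):
            patch_i = img_p[y:y + win, x:x + win]
            # range weights: penalize intensity difference to the center pixel
            rng = np.exp(-(patch_i - image[y, x])**2 / (2 * sigma_i**2))
            wgt = spatial * rng
            wgt /= wgt.sum()
            patch_d = disp_p[y:y + win, x:x + win]
            out[y, x] = (wgt[..., None] * patch_d).sum(axis=(0, 1))
    return out
```

With a small `sigma_i`, a displacement discontinuity that coincides with an intensity edge is preserved almost exactly, whereas a Gaussian of the same spatial width would blur it; this is the content-dependent smoothing the abstract describes.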
Affiliation(s)
- D Demirović
- Faculty of Electrical Engineering, University of Tuzla, Bosnia and Herzegovina.
| | - A Šerifović-Trbalić
- Faculty of Electrical Engineering, University of Tuzla, Bosnia and Herzegovina.
| | - N Prljača
- Faculty of Electrical Engineering, University of Tuzla, Bosnia and Herzegovina.
| | - Ph C Cattin
- Medical Image Analysis Center (MIAC), University of Basel, Basel, Switzerland.
| |