1
Zhu X, Ding M, Zhang X. Free form deformation and symmetry constraint-based multi-modal brain image registration using generative adversarial nets. CAAI Transactions on Intelligence Technology 2023. [DOI: 10.1049/cit2.12159]
Affiliation(s)
- Xingxing Zhu
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
- Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
- Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
2
Zou J, Gao B, Song Y, Qin J. A review of deep learning-based deformable medical image registration. Front Oncol 2022; 12:1047215. [PMID: 36568171] [PMCID: PMC9768226] [DOI: 10.3389/fonc.2022.1047215]
Abstract
The alignment of images through deformable image registration is vital to clinical applications (e.g., atlas creation, image fusion, and tumor targeting in image-guided navigation systems) and is still a challenging problem. Recent progress in the field of deep learning has significantly advanced the performance of medical image registration. In this review, we present a comprehensive survey on deep learning-based deformable medical image registration methods. These methods are classified into five categories: Deep Iterative Methods, Supervised Methods, Unsupervised Methods, Weakly Supervised Methods, and Latest Methods. A detailed review of each category is provided with discussions about contributions, tasks, and inadequacies. We also provide statistical analysis for the selected papers from the point of view of image modality, the region of interest (ROI), evaluation metrics, and method categories. In addition, we summarize 33 publicly available datasets that are used for benchmarking the registration algorithms. Finally, the remaining challenges, future directions, and potential trends are discussed in our review.
Affiliation(s)
- Jing Zou
- Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR, China
3
Zhu X, Huang Z, Ding M, Zhang X. Non-rigid multi-modal brain image registration based on two-stage generative adversarial nets. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.07.014]
4
Soleimani M, Aghagolzadeh A, Ezoji M. Symmetry-based representation for registration of multimodal images. Med Biol Eng Comput 2022; 60:1015-1032. [PMID: 35171412] [DOI: 10.1007/s11517-022-02515-1]
Abstract
We propose a new two-dimensional structural representation method for registering multimodal images that exploits the local structural symmetry of images, which is similar across modalities. Symmetry is measured in various orientations, and the strongest responses are mapped into the representation image. Optimum performance is obtained with only two orientations, a variant called binary dominant symmetry representation (BDSR). This representation is highly robust to noise and intensity non-uniformity. We also propose a new objective function, based on the L2 distance, with low sensitivity to the overlapping region. Five different meta-heuristic algorithms are then applied comparatively, two of them used for the first time in image registration. BDSR remarkably outperforms previous successful representations, such as entropy images, self-similarity context, and the modality-independent local binary pattern, as well as mutual information-based registration, in terms of success rate, runtime, convergence error, and representation construction.
Affiliation(s)
- Mojtaba Soleimani
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Ali Aghagolzadeh
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Mehdi Ezoji
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
5
Küstner T, Pan J, Qi H, Cruz G, Gilliam C, Blu T, Yang B, Gatidis S, Botnar R, Prieto C. LAPNet: Non-Rigid Registration Derived in k-Space for Magnetic Resonance Imaging. IEEE Trans Med Imaging 2021; 40:3686-3697. [PMID: 34242163] [DOI: 10.1109/tmi.2021.3096131]
Abstract
Physiological motion, such as cardiac and respiratory motion, during Magnetic Resonance (MR) image acquisition can cause image artifacts. Motion correction techniques have been proposed to compensate for these types of motion during thoracic scans, relying on accurate motion estimation from undersampled motion-resolved reconstruction. A particular interest and challenge lie in the derivation of reliable non-rigid motion fields from the undersampled motion-resolved data. Motion estimation is usually formulated in image space via diffusion, parametric-spline, or optical flow methods. However, image-based registration can be impaired by remaining aliasing artifacts due to the undersampled motion-resolved reconstruction. In this work, we describe a formalism to perform non-rigid registration directly in the sampled Fourier space, i.e. k-space. We propose a deep-learning based approach to perform fast and accurate non-rigid registration from the undersampled k-space data. The basic working principle originates from the Local All-Pass (LAP) technique, a recently introduced optical flow-based registration. The proposed LAPNet is compared against traditional and deep learning image-based registrations and tested on fully-sampled and highly-accelerated (with two undersampling strategies) 3D respiratory motion-resolved MR images in a cohort of 40 patients with suspected liver or lung metastases and 25 healthy subjects. The proposed LAPNet provided consistent and superior performance to image-based approaches throughout different sampling trajectories and acceleration factors.
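The idea of registering directly in the sampled Fourier space can be made concrete with the simplest k-space method, phase correlation: a rigid translation becomes a linear phase ramp in k-space (Fourier shift theorem), recoverable as a correlation peak. The NumPy sketch below covers only this classical rigid special case, not the LAP or LAPNet formulation, and assumes integer circular shifts.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the translation d with b ≈ np.roll(a, d) from the spectra
    alone: by the Fourier shift theorem the displacement appears as a linear
    phase, and the inverse FFT of the normalised cross-power spectrum peaks
    at the shift."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B                      # phase encodes the shift
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates to signed shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, a.shape))
```

Non-rigid motion has no such single global phase ramp, which is precisely the gap the local all-pass formulation addresses by estimating filters locally.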
6
Wang X, Ning G, Yang N, Zhang X, Zhang H, Liao H. An Unsupervised Convolution Neural Network for Deformable Registration of Mono/Multi-Modality Medical Images. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3455-3458. [PMID: 34891983] [DOI: 10.1109/embc46164.2021.9630731]
Abstract
Image registration is a fundamental and crucial step in medical image analysis. However, owing to the differences between mono-modal and multi-modal registration tasks and the complexity of the intensity relationship between multi-modal images, existing unsupervised deep learning methods can hardly handle both registration tasks simultaneously. In this paper, we propose a novel approach to register both mono- and multi-modal images in a differentiable manner. By approximately computing the mutual information in a differentiable form and combining it with a CNN, the deformation field can be predicted quickly and accurately without any prior information about the image intensity relationship. The registration is implemented in an unsupervised manner, avoiding the need for ground-truth deformation fields. We evaluate the algorithm on two public datasets for mono-modal and multi-modal image registration, which confirms the effectiveness and feasibility of our method. In addition, experiments on patient data demonstrate the practicability and robustness of the proposed method.
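A differentiable approximation of mutual information of the kind described above is usually built from soft (Parzen-window) histograms, in which every intensity contributes a smooth kernel weight to each bin instead of a hard count. The NumPy sketch below is a minimal illustration of that idea, not the paper's implementation; the bin count and kernel width are assumed values.

```python
import numpy as np

def soft_joint_histogram(x, y, bins=32, sigma=0.05):
    """Parzen-window joint histogram: each sample spreads a Gaussian weight
    over the bins, so the histogram varies smoothly (hence differentiably)
    with the intensities. x, y are flat arrays scaled to [0, 1]."""
    centers = np.linspace(0.0, 1.0, bins)
    wx = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
    wy = np.exp(-0.5 * ((y[:, None] - centers[None, :]) / sigma) ** 2)
    wx /= wx.sum(axis=1, keepdims=True)
    wy /= wy.sum(axis=1, keepdims=True)
    joint = wx.T @ wy               # bins x bins co-occurrence table
    return joint / joint.sum()      # normalise to a probability table

def mutual_information(x, y, bins=32, sigma=0.05):
    """MI as the KL divergence between the soft joint histogram and the
    product of its marginals."""
    p_xy = soft_joint_histogram(x, y, bins, sigma)
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    eps = 1e-10
    return float(np.sum(p_xy * np.log((p_xy + eps) / (p_x @ p_y + eps))))
```

Because every operation here is smooth, the same construction written in an autodiff framework yields gradients with respect to the warped image, which is what allows MI to drive a CNN-predicted deformation field.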
7
Ha IY, Heinrich MP. Modality-agnostic self-supervised deep feature learning and fast instance optimisation for multimodal fusion in ultrasound-guided interventions. Comput Methods Programs Biomed 2021; 211:106374. [PMID: 34601186] [DOI: 10.1016/j.cmpb.2021.106374]
Abstract
BACKGROUND AND OBJECTIVE: Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect of automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to tackle the intertwined objectives of fast inference time and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long-ranging local displacement probability maps from fast and robust global transformation prediction.
METHODS: We first train a convolutional neural network (CNN) to extract modality-agnostic features for both 3D volumes, with sub-second computation times during inference. Using sparsity-based network weight pruning, the model complexity and computation times can be substantially reduced. Based on these features, a large discretized search range of 3D motion vectors is explored to compute a probabilistic displacement map for each control point. These 3D probability maps are employed in our newly proposed, computationally efficient instance optimisation, which robustly estimates the most likely globally linear transformation that best reflects the local displacement beliefs, subject to outlier rejection.
RESULTS: Our experimental validation demonstrates state-of-the-art accuracy on the challenging CuRIOUS dataset, with an average target registration error of 2.50 mm, a model size of only 1.2 MByte, and run times of approximately 3 seconds for a full 3D multimodal registration.
CONCLUSION: We show that a significant improvement in accuracy and robustness can be gained with instance optimisation, and that our fast self-supervised deep learning model achieves state-of-the-art accuracy on a challenging registration task in only 3 seconds.
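The discretized displacement search in METHODS can be sketched in miniature: for one control point, score every candidate integer displacement by a patch distance between the two feature maps, then turn the negated costs into a probability map with a softmax. The toy 2D NumPy version below is illustrative only; the patch and search sizes are assumptions, and plain SSD stands in for the learned modality-agnostic feature distance.

```python
import numpy as np

def displacement_probability_map(feat_fixed, feat_moving, point, search=4, patch=3):
    """For one control point, score each integer displacement in a
    (2*search+1)^2 window by patch SSD between feature maps, then softmax
    the negated costs into a displacement probability map."""
    y, x = point
    p = patch // 2
    ref = feat_fixed[y - p:y + p + 1, x - p:x + p + 1]
    size = 2 * search + 1
    cost = np.zeros((size, size))
    for i, dy in enumerate(range(-search, search + 1)):
        for j, dx in enumerate(range(-search, search + 1)):
            cand = feat_moving[y + dy - p:y + dy + p + 1,
                               x + dx - p:x + dx + p + 1]
            cost[i, j] = np.sum((ref - cand) ** 2)
    logits = -cost
    logits -= logits.max()          # numerical stability for the softmax
    prob = np.exp(logits)
    return prob / prob.sum()
```

An instance optimisation of the kind the paper proposes would then fit one global linear transform to many such per-control-point belief maps, down-weighting outliers.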
Affiliation(s)
- In Young Ha
- Institute of Medical Informatics, University of Luebeck, Ratzeburger Allee 160, 23564 Luebeck, Germany
- Mattias P Heinrich
- Institute of Medical Informatics, University of Luebeck, Ratzeburger Allee 160, 23564 Luebeck, Germany
8
Regional Localization of Mouse Brain Slices Based on Unified Modal Transformation. Symmetry (Basel) 2021. [DOI: 10.3390/sym13060929]
Abstract
Brain science research often requires accurate localization and quantitative analysis of neuronal activity in different brain regions. The premise of such analysis is to determine the brain region of each site on a brain slice by referring to the Allen Reference Atlas (ARA), i.e., the regional localization of the brain slice. Image registration can be used to solve this localization problem. However, conventional multi-modal image registration is unsatisfactory because of the complex modality gap between the brain slice and the ARA. Inspired by the idea that people automatically ignore noise and establish correspondence based on key regions, we propose a novel method, the Joint Enhancement of Multimodal Information (JEMI) network, based on a symmetric encoder-decoder. In this way, the brain slice and the ARA are converted into segmentation maps with a unified modality, which greatly reduces the difficulty of registration. Furthermore, combined with a diffeomorphic registration algorithm, the existing topological structure is preserved. The results indicate that, compared with existing methods, the proposed method effectively overcomes the influence of non-unified modalities and achieves accurate and rapid localization of the brain slice.
9
Chen Z, Xu Z, Gui Q, Yang X, Cheng Q, Hou W, Ding M. Self-learning based medical image representation for rigid real-time and multimodal slice-to-volume registration. Inf Sci (N Y) 2020. [DOI: 10.1016/j.ins.2020.06.072]
10
Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020; 14:431-449. [PMID: 32728877] [DOI: 10.1007/s11684-020-0761-1]
Abstract
Radiation therapy (RT) is widely used to treat cancer, and substantial technological advances in RT have occurred over the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will possibly play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those whose complexity has increased because of technological advances. The improvement in efficiency and consistency is important for managing the increasing cancer burden on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT, including superior images for real-time intervention and adaptive and personalized RT; AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that can benefit from AI. It primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Affiliation(s)
- Ke Sheng
- Department of Radiation Oncology, University of California, Los Angeles, CA 90095, USA
11
Hwang YJ, Lee JG, Moon UC, Park HH. SSD-TSEFFM: New SSD Using Trident Feature and Squeeze and Extraction Feature Fusion. Sensors (Basel) 2020; 20:3630. [PMID: 32605288] [PMCID: PMC7374356] [DOI: 10.3390/s20133630]
Abstract
The single shot multi-box detector (SSD) exhibits low accuracy in small-object detection because it does not consider the scale contextual information between its layers, and its shallow layers lack adequate semantic information. To improve the accuracy of the original SSD, this paper proposes a new single shot multi-box detector using trident feature and squeeze and extraction feature fusion (SSD-TSEFFM), which employs the trident network and the squeeze-and-excitation feature fusion module. A trident feature module (TFM), inspired by the trident network, is developed to capture scale contextual information; the use of dilated convolution makes the proposed model robust to scale changes. Further, the squeeze-and-excitation block feature fusion module (SEFFM) provides more semantic information to the model. The SSD-TSEFFM is compared with Faster R-CNN (2015), SSD (2016), and DF-SSD (2020) on the PASCAL VOC 2007 and 2012 datasets. The experimental results demonstrate the high accuracy of the proposed model in small-object detection, in addition to good overall accuracy: SSD-TSEFFM achieved 80.4% mAP and 80.2% mAP on the 2007 and 2012 datasets, respectively, an average improvement of approximately 2% over the other models.
12
Yang F, Ding M, Zhang X. Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighborhood Descriptor. Sensors (Basel) 2019; 19:4675. [PMID: 31661828] [PMCID: PMC6864520] [DOI: 10.3390/s19214675]
Abstract
Non-rigid multi-modal three-dimensional (3D) medical image registration is highly challenging because of the difficulty of constructing a similarity measure and solving for the non-rigid transformation parameters. A novel structural representation based registration method is proposed to address these problems. First, an improved modality independent neighborhood descriptor (MIND) based on foveated nonlocal self-similarity is designed for effective structural representation of 3D medical images, transforming multi-modal registration into a mono-modal problem. The sum of absolute differences between structural representations is used as the similarity measure. Subsequently, a foveated MIND based spatial constraint is introduced into the Markov random field (MRF) optimization to reduce the number of transformation parameters and restrict the calculation of the energy function to image regions involving non-rigid deformation. Finally, accurate and efficient 3D medical image registration is realized by minimizing the similarity measure based MRF energy function. Extensive experiments on 3D positron emission tomography (PET), computed tomography (CT), and T1-, T2-, and proton density (PD)-weighted magnetic resonance (MR) images with synthetic deformation demonstrate that the proposed method has higher computational efficiency and registration accuracy, in terms of target registration error (TRE), than registration methods based on the hybrid L-BFGS-B and cat swarm optimization (HLCSO), the sum of squared differences on entropy images, the MIND, and the self-similarity context (SSC) descriptor, except that it yields slightly larger TRE than the HLCSO for CT-PET registration. Experiments on real MR and ultrasound images with unknown deformation further demonstrate the practicality and superiority of the proposed method.
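The core idea of a MIND-style structural representation, mapping local self-similarity distances through exp(-d/v) so that images of different modalities become directly comparable, can be sketched compactly. The toy NumPy version below uses single-pixel distances over a 4-neighbourhood rather than the foveated patch weighting of the paper, and pairs the descriptor with the sum-of-absolute-differences similarity the abstract describes.

```python
import numpy as np

def mind_descriptor(img, eps=1e-8):
    """Toy MIND-style descriptor: per-pixel squared differences to the four
    neighbour shifts, normalised by their local mean and mapped through
    exp(-d/v). Squared differences are invariant to affine intensity
    changes, which is what makes the representation modality-tolerant."""
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    d = np.stack([(img - np.roll(img, s, axis=(0, 1))) ** 2 for s in shifts])
    v = d.mean(axis=0) + eps                  # local variance estimate
    desc = np.exp(-d / v)
    return desc / (desc.max(axis=0) + eps)    # scale so max response is 1

def sad_similarity(img_a, img_b):
    """Sum of absolute differences between structural representations,
    i.e. the mono-modalised similarity measure."""
    return float(np.abs(mind_descriptor(img_a) - mind_descriptor(img_b)).sum())
```

Because the descriptor depends only on intensity differences, a contrast-inverted copy of an image produces the same representation, so the SAD between the two is (near) zero while an unrelated image scores much higher.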
Affiliation(s)
- Feng Yang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- School of Computer and Electronics and Information, Guangxi University, Nanning 530004, China
- Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
13
Wei L, Osman S, Hatt M, El Naqa I. Machine learning for radiomics-based multimodality and multiparametric modeling. Q J Nucl Med Mol Imaging 2019; 63:323-338. [PMID: 31527580] [DOI: 10.23736/s1824-4785.19.03213-8]
Abstract
Owing to recent developments in both hardware and software, multimodality medical imaging techniques have been increasingly applied in clinical practice and research. Previously, the application of multimodality imaging in oncology mainly combined anatomical and functional imaging to improve diagnostic specificity and/or target definition, as in positron emission tomography/computed tomography (PET/CT) and single-photon emission CT (SPECT)/CT. More recently, the fusion of further image types, such as multiparametric magnetic resonance imaging (MRI) sequences, different PET tracer images, and PET/MRI, has become more prevalent, enabling more comprehensive characterization of the tumor phenotype. To take advantage of these valuable multimodal data for clinical decision making, we present two ways to implement multimodal image analysis with radiomics: radiomic (handcrafted feature) based and deep learning (machine-learned feature) based methods. Applying advanced machine (deep) learning algorithms across multimodality images has shown better results than single-modality modeling for prognosis and/or prediction of clinical outcomes. This holds great potential for providing more personalized treatment and achieving better outcomes.
Affiliation(s)
- Lise Wei
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Sarah Osman
- Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, UK
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- Issam El Naqa
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
14
Morales MA, Izquierdo-Garcia D, Aganj I, Kalpathy-Cramer J, Rosen BR, Catana C. Implementation and Validation of a Three-dimensional Cardiac Motion Estimation Network. Radiol Artif Intell 2019; 1:e180080. [PMID: 32076659] [PMCID: PMC6677286] [DOI: 10.1148/ryai.2019180080]
Abstract
PURPOSE: To describe an unsupervised three-dimensional cardiac motion estimation network (CarMEN) for deformable motion estimation from two-dimensional cine MR images.
MATERIALS AND METHODS: A function was implemented using CarMEN, a convolutional neural network that takes two three-dimensional input volumes and outputs a motion field. A smoothness constraint was imposed on the field by regularizing the Frobenius norm of its Jacobian matrix. CarMEN was trained and tested with data from 150 cardiac patients who underwent MRI examinations and was validated on synthetic (n = 100) and pediatric (n = 33) datasets. CarMEN was compared with five state-of-the-art nonrigid registration methods by using several performance metrics, including the Dice similarity coefficient (DSC) and end-point error.
RESULTS: On the synthetic dataset, CarMEN achieved a median DSC of 0.85, higher than all five methods (minimum-maximum median [MMM], 0.67-0.84; P < .001), and a median end-point error of 1.7, lower than (MMM, 2.1-2.7; P < .001) or similar to (MMM, 1.6-1.7; P > .05) all other techniques. On the real datasets, CarMEN achieved a median DSC of 0.73 for Automated Cardiac Diagnosis Challenge data, higher than (MMM, 0.33; P < .0001) or similar to (MMM, 0.72-0.75; P > .05) all other methods, and a median DSC of 0.77 for pediatric data, higher than (MMM, 0.71-0.76; P < .0001) or similar to (MMM, 0.77-0.78; P > .05) all other methods. All P values were derived from pairwise testing. For all other metrics, CarMEN achieved better accuracy on all datasets than all other techniques except one, which had the worst motion estimation accuracy.
CONCLUSION: The proposed deep learning-based approach for three-dimensional cardiac motion estimation allowed the derivation of a motion model that balances motion characterization and image registration accuracy, and achieved motion estimation accuracy comparable to or better than that of several state-of-the-art image registration algorithms. © RSNA, 2019. Supplemental material is available for this article.
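The two quantitative ingredients named in this abstract, the Dice similarity coefficient and the Jacobian-based smoothness penalty, are compact enough to sketch directly. The 2D NumPy version below is illustrative only (the paper regularizes a 3D motion field the same way), with finite differences standing in for the network's differentiable gradients.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 |A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jacobian_frobenius_penalty(field):
    """Smoothness term: mean squared Frobenius norm of the spatial Jacobian
    of a displacement field. `field` has shape (2, H, W) for a 2D field;
    np.gradient supplies the finite-difference partial derivatives."""
    grads = [np.gradient(c) for c in field]   # d(u_i)/d(x_j) per component
    fro2 = sum((g ** 2).sum() for comp in grads for g in comp)
    return fro2 / field[0].size
```

A constant (pure-translation) field incurs zero penalty, so the regularizer discourages only spatially varying, i.e. potentially folding, motion.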
Affiliation(s)
- Iman Aganj
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 149 13th St, Charlestown, MA 02129 (M.A.M., D.I.G., I.A., J.K.C., B.R.R., C.C.); Harvard-MIT Division of Health Sciences and Technology (M.A.M.) and Computer Science and Artificial Intelligence Laboratory (I.A.), Massachusetts Institute of Technology, Cambridge, Mass
- Jayashree Kalpathy-Cramer
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 149 13th St, Charlestown, MA 02129 (M.A.M., D.I.G., I.A., J.K.C., B.R.R., C.C.); Harvard-MIT Division of Health Sciences and Technology (M.A.M.) and Computer Science and Artificial Intelligence Laboratory (I.A.), Massachusetts Institute of Technology, Cambridge, Mass
- Bruce R. Rosen
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 149 13th St, Charlestown, MA 02129 (M.A.M., D.I.G., I.A., J.K.C., B.R.R., C.C.); Harvard-MIT Division of Health Sciences and Technology (M.A.M.) and Computer Science and Artificial Intelligence Laboratory (I.A.), Massachusetts Institute of Technology, Cambridge, Mass
- Ciprian Catana
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 149 13th St, Charlestown, MA 02129 (M.A.M., D.I.G., I.A., J.K.C., B.R.R., C.C.); Harvard-MIT Division of Health Sciences and Technology (M.A.M.) and Computer Science and Artificial Intelligence Laboratory (I.A.), Massachusetts Institute of Technology, Cambridge, Mass
15
A Review of Point Set Registration: From Pairwise Registration to Groupwise Registration. Sensors (Basel) 2019; 19:1191. [PMID: 30857205] [PMCID: PMC6427196] [DOI: 10.3390/s19051191]
Abstract
This paper presents a comprehensive literature review on point set registration. The state-of-the-art modeling methods and algorithms for point set registration are discussed and summarized, with special attention paid to pairwise and groupwise registration. Some of the most prominent representative methods are selected for qualitative and quantitative experiments. From the experiments we have conducted on 2D and 3D data, the CPD-GL pairwise registration algorithm and the JRMPC groupwise registration algorithm seem to outperform their rivals in both accuracy and computational complexity. Furthermore, future research directions and avenues in the area are identified.
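For a concrete feel of pairwise point set registration, a minimal rigid ICP (hard nearest-neighbour matches alternated with the closed-form Kabsch rotation-plus-translation update) is sketched below. CPD-GL and JRMPC, reviewed above, replace the hard matches with probabilistic correspondences, so this NumPy sketch is only a baseline illustration, not either of those algorithms.

```python
import numpy as np

def icp_rigid_2d(src, dst, iters=20):
    """Minimal rigid ICP on 2D point sets (rows are points). Each iteration:
    (1) match every source point to its nearest destination point;
    (2) solve for the best rotation R and translation t in closed form
        via the SVD of the cross-covariance (Kabsch/Procrustes)."""
    cur = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]          # hard nearest neighbours
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # proper rotation (det=+1)
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return cur, R_total, t_total
```

Both ICP steps are individually optimal for the current matches, so the nearest-neighbour cost never increases; the hard assignments are also why ICP is sensitive to initialization, the weakness the probabilistic methods in this review address.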