1
Li J, Tuckute G, Fedorenko E, Edlow BL, Dalca AV, Fischl B. JOSA: Joint surface-based registration and atlas construction of brain geometry and function. Med Image Anal 2024; 98:103292. PMID: 39173411. DOI: 10.1016/j.media.2024.103292.
Abstract
Surface-based cortical registration is an important topic in medical image analysis and facilitates many downstream applications. Current approaches for cortical registration are mainly driven by geometric features, such as sulcal depth and curvature, and often assume that registration of folding patterns leads to alignment of brain function. However, functional variability of anatomically corresponding areas across subjects has been widely reported, particularly in higher-order cognitive areas. In this work, we present JOSA, a novel cortical registration framework that jointly models the mismatch between geometry and function while simultaneously learning an unbiased population-specific atlas. Using a semi-supervised training strategy, JOSA achieves registration performance in both geometry and function superior to that of state-of-the-art methods, without requiring functional data at inference. This learning framework can be extended to guide spherical registration with any auxiliary data that are available during training but difficult or impossible to obtain at inference, such as parcellations, architectonic identity, transcriptomic information, and molecular profiles. By recognizing the mismatch between geometry and function, JOSA provides new insights into the future development of registration methods using joint analysis of brain structure and function.
Affiliation(s)
- Jian Li
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, United States of America; Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, United States of America.
- Greta Tuckute
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America; McGovern Institute for Brain Research, Massachusetts Institute of Technology, United States of America
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America; McGovern Institute for Brain Research, Massachusetts Institute of Technology, United States of America; Program in Speech Hearing Bioscience and Technology, Harvard University, United States of America
- Brian L Edlow
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, United States of America; Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, United States of America
- Adrian V Dalca
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, United States of America; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, United States of America
- Bruce Fischl
- A. A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, United States of America; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, United States of America
2
Sun S, Han K, You C, Tang H, Kong D, Naushad J, Yan X, Ma H, Khosravi P, Duncan JS, Xie X. Medical image registration via neural fields. Med Image Anal 2024; 97:103249. PMID: 38963972. DOI: 10.1016/j.media.2024.103249.
Abstract
Image registration is an essential step in many medical image analysis tasks. Traditional methods for image registration are primarily optimization-driven, finding the optimal deformations that maximize the similarity between two images. Recent learning-based methods, trained to directly predict transformations between two images, run much faster, but suffer from performance deficiencies due to domain shift. Here we present a new neural-network-based image registration framework, called NIR (Neural Image Registration), which is based on optimization but utilizes deep neural networks to model deformations between image pairs. NIR represents the transformation between two images with a continuous function implemented via neural fields, receiving a 3D coordinate as input and outputting the corresponding deformation vector. NIR provides two ways of generating the deformation field: directly outputting a displacement vector field for general deformable registration, or outputting a velocity vector field that is integrated to derive the deformation field for diffeomorphic image registration. The optimal registration is discovered by updating the parameters of the neural field via stochastic mini-batch gradient descent. We describe several design choices that facilitate model optimization, including coordinate encoding, sinusoidal activation, coordinate sampling, and intensity sampling. NIR is evaluated on two 3D MR brain scan datasets, demonstrating highly competitive performance in terms of both registration accuracy and regularity. Compared to traditional optimization-based methods, our approach achieves better results in shorter computation times. In addition, our method generalizes better to a cross-dataset registration task than pre-trained learning-based methods.
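The central mechanism here, an MLP that maps a normalized 3D coordinate to a displacement vector and is optimized per image pair, can be sketched in a few lines. The layer sizes and sinusoidal first activation below are illustrative assumptions, not the authors' NIR implementation.

```python
import torch
import torch.nn as nn

class CoordinateDeformationField(nn.Module):
    """Minimal neural field: maps a 3D coordinate to a 3D displacement vector."""
    def __init__(self, hidden=128, layers=4, omega0=30.0):
        super().__init__()
        self.omega0 = omega0
        dims = [3] + [hidden] * layers + [3]
        self.layers = nn.ModuleList([nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)])

    def forward(self, xyz):                                  # xyz: (N, 3), coordinates scaled to [-1, 1]
        h = torch.sin(self.omega0 * self.layers[0](xyz))     # sinusoidal encoding of coordinates
        for layer in self.layers[1:-1]:
            h = torch.sin(layer(h))
        return self.layers[-1](h)                            # (N, 3) displacement vectors

# Per-pair optimization (conceptually): sample a mini-batch of coordinates, warp the moving
# image at xyz + displacement, minimize dissimilarity to the fixed image plus a smoothness
# penalty, and update the MLP parameters with a stochastic optimizer.
```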
Affiliation(s)
- Shanlin Sun
- University of California, Irvine, Irvine, CA 92697, USA.
- Kun Han
- University of California, Irvine, Irvine, CA 92697, USA.
- Chenyu You
- Yale University, New Haven, CT 06520, USA.
- Hao Tang
- University of California, Irvine, Irvine, CA 92697, USA.
- Deying Kong
- University of California, Irvine, Irvine, CA 92697, USA.
- Xiangyi Yan
- University of California, Irvine, Irvine, CA 92697, USA.
- Haoyu Ma
- University of California, Irvine, Irvine, CA 92697, USA.
- Pooya Khosravi
- University of California, Irvine, Irvine, CA 92697, USA.
- Xiaohui Xie
- University of California, Irvine, Irvine, CA 92697, USA.
3
Zhu Z, Li Q, Wei Y, Song R. Hierarchical multi-level dynamic hyperparameter deformable image registration with convolutional neural network. Phys Med Biol 2024; 69:175007. PMID: 39053510. DOI: 10.1088/1361-6560/ad67a6.
Abstract
Objective. To enable the registration network to be trained only once, achieving fast regularization hyperparameter selection during the inference phase, and to improve registration accuracy and deformation field regularity. Approach. Hyperparameter tuning is an essential process for deep learning deformable image registration (DLDIR). Most DLDIR methods perform a large number of independent experiments to select appropriate regularization hyperparameters, which is time- and resource-consuming. To address this issue, we propose a novel dynamic hyperparameter block, which comprises a distributed mapping network, dynamic convolution, an attention feature extraction layer, and an instance normalization layer. The dynamic hyperparameter block encodes the input feature vectors and regularization hyperparameters into learnable feature variables and dynamic convolution parameters, which change the feature statistics of the high-dimensional layer features, respectively. In addition, the proposed method replaces the single-level residual blocks in LapIRN with a hierarchical multi-level architecture built on the dynamic hyperparameter block in order to improve registration performance. Main results. On the OASIS dataset, the proposed method reduced the percentage of voxels with |Jφ| ≤ 0 by 28.01% and 9.78% and improved the Dice similarity coefficient by 1.17% and 1.17%, compared with LapIRN and CIR, respectively. On the DIR-Lab dataset, the proposed method reduced the percentage of voxels with |Jφ| ≤ 0 by 10.00% and 5.70% and reduced the target registration error by 10.84% and 10.05%, compared with LapIRN and CIR, respectively. Significance. The proposed method can quickly produce the registration deformation field corresponding to an arbitrary hyperparameter value during the inference phase. Extensive experiments demonstrate that the proposed method reduces training time compared to DLDIR with fixed regularization hyperparameters while outperforming state-of-the-art registration methods in terms of registration accuracy and deformation smoothness on the brain dataset OASIS and the lung dataset DIR-Lab.
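The general idea of conditioning a registration network on the regularization weight, so that a single trained model serves any hyperparameter value at inference, can be illustrated with a simple feature-modulation layer. The mapping width and modulation form below are assumptions for illustration, not the paper's dynamic hyperparameter block.

```python
import torch
import torch.nn as nn

class HyperparameterModulation(nn.Module):
    """Scale and shift convolutional features as a function of the regularization weight lambda."""
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.mapping = nn.Sequential(            # maps the scalar hyperparameter to per-channel parameters
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * channels),
        )

    def forward(self, features, lam):            # features: (B, C, D, H, W), lam: (B, 1)
        gamma, beta = self.mapping(lam).chunk(2, dim=1)
        gamma = gamma.view(*gamma.shape, 1, 1, 1)
        beta = beta.view(*beta.shape, 1, 1, 1)
        return gamma * features + beta           # feature statistics now depend on lambda

# At training time, lambda is sampled per batch and also weights the smoothness term of the loss;
# at inference, the user picks any lambda and obtains the corresponding deformation without retraining.
```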
Affiliation(s)
- Zhenyu Zhu
- School of Control Science and Engineering, Shandong University, Jinan, People's Republic of China
- Qianqian Li
- School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University, Jinan, People's Republic of China
- Ying Wei
- School of Control Science and Engineering, Shandong University, Jinan, People's Republic of China
- Shandong Research Institute of Industrial Technology, Jinan, People's Republic of China
- Rui Song
- School of Control Science and Engineering, Shandong University, Jinan, People's Republic of China
- Shandong Research Institute of Industrial Technology, Jinan, People's Republic of China
4
Hoffmann M, Hoopes A, Greve DN, Fischl B, Dalca AV. Anatomy-aware and acquisition-agnostic joint registration with SynthMorph. Imaging Neuroscience 2024; 2:1-33. PMID: 39015335. PMCID: PMC11247402. DOI: 10.1162/imag_a_00197.
Abstract
Affine image registration is a cornerstone of medical-image analysis. While classical algorithms can achieve excellent accuracy, they solve a time-consuming optimization for every image pair. Deep-learning (DL) methods learn a function that maps an image pair to an output transform. Evaluating the function is fast, but capturing large transforms can be challenging, and networks tend to struggle if a test-image characteristic shifts from the training domain, such as the resolution. Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if algorithms consider all structures in the image. We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image without preprocessing. First, we leverage a strategy that trains networks with widely varying images synthesized from label maps, yielding robust performance across acquisition specifics unseen at training. Second, we optimize the spatial overlap of select anatomical labels. This enables networks to distinguish anatomy of interest from irrelevant structures, removing the need for preprocessing that excludes content which would impinge on anatomy-specific registration. Third, we combine the affine model with a deformable hypernetwork that lets users choose the optimal deformation-field regularity for their specific data, at registration time, in a fraction of the time required by classical methods. This framework is applicable to learning anatomy-aware, acquisition-agnostic registration of any anatomy with any architecture, as long as label maps are available for training. We analyze how competing architectures learn affine transforms and compare state-of-the-art registration tools across an extremely diverse set of neuroimaging data, aiming to truly capture the behavior of methods in the real world. SynthMorph demonstrates high accuracy and is available at https://w3id.org/synthmorph, as a single complete end-to-end solution for registration of brain magnetic resonance imaging (MRI) data.
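The anatomy-aware supervision described above, optimizing the spatial overlap of selected anatomical labels rather than raw intensities, amounts to a soft Dice loss over warped label maps. A minimal sketch follows; the label encoding and tensor layout are assumptions, not the SynthMorph code.

```python
import torch

def soft_dice_loss(warped_labels, fixed_labels, eps=1e-6):
    """Mean soft Dice loss over selected anatomical labels.

    warped_labels, fixed_labels: (B, K, D, H, W) one-hot or soft label probabilities,
    where K is the number of labels whose spatial overlap the network should optimize.
    """
    dims = (2, 3, 4)
    intersection = (warped_labels * fixed_labels).sum(dims)
    volumes = warped_labels.sum(dims) + fixed_labels.sum(dims)
    dice = (2 * intersection + eps) / (volumes + eps)   # (B, K), one Dice score per label
    return 1 - dice.mean()
```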
Affiliation(s)
- Malte Hoffmann
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Andrew Hoopes
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Douglas N. Greve
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Adrian V. Dalca
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
5
Cao YH, Bourbonne V, Lucia F, Schick U, Bert J, Jaouen V, Visvikis D. CT respiratory motion synthesis using joint supervised and adversarial learning. Phys Med Biol 2024; 69:095001. PMID: 38537289. DOI: 10.1088/1361-6560/ad388a.
Abstract
Objective. Four-dimensional computed tomography (4DCT) imaging consists of reconstructing a CT acquisition into multiple phases to track internal organ and tumor motion. It is commonly used in radiotherapy treatment planning to establish planning target volumes. However, 4DCT increases protocol complexity, may not align with patient breathing during treatment, and leads to higher radiation delivery. Approach. In this study, we propose a deep synthesis method to generate pseudo respiratory CT phases from static images for motion-aware treatment planning. The model produces patient-specific deformation vector fields (DVFs) by conditioning synthesis on an external patient surface-based estimation, mimicking respiratory monitoring devices. A key methodological contribution is to encourage DVF realism through supervised DVF training while using an adversarial term jointly not only on the warped image but also on the magnitude of the DVF itself. This way, we avoid the excessive smoothness typically obtained through deep unsupervised learning, and encourage correlations with the respiratory amplitude. Main results. Performance is evaluated using real 4DCT acquisitions with smaller tumor volumes than previously reported. Results demonstrate for the first time that the generated pseudo-respiratory CT phases can capture organ and tumor motion with similar accuracy to repeated 4DCT scans of the same patient. Mean inter-scan tumor center-of-mass distances and Dice similarity coefficients were 1.97 mm and 0.63, respectively, for real 4DCT phases and 2.35 mm and 0.71 for synthetic phases, comparing favorably to a state-of-the-art technique (RMSim). Significance. This study presents a deep image synthesis method that addresses the limitations of conventional 4DCT by generating pseudo-respiratory CT phases from static images. Although further studies are needed to assess the dosimetric impact of the proposed method, this approach has the potential to reduce radiation exposure in radiotherapy treatment planning while maintaining accurate motion representation. Our training and testing code can be found at https://github.com/cyiheng/Dynagan.
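The training objective outlined above, supervised DVF regression combined with adversarial terms on both the warped image and the DVF magnitude, can be sketched as a composite loss. The discriminator interfaces and loss weights below are assumptions for illustration, not the released Dynagan code.

```python
import torch
import torch.nn.functional as F

def generator_loss(dvf_pred, dvf_ref, warped_ct, disc_image, disc_dvf_mag,
                   w_sup=10.0, w_adv=1.0):
    """Composite generator loss: supervised DVF regression plus adversarial realism terms.

    dvf_pred, dvf_ref: (B, 3, D, H, W) deformation vector fields (predicted / reference);
    disc_image and disc_dvf_mag are discriminators returning logits (assumed interfaces).
    """
    sup = F.l1_loss(dvf_pred, dvf_ref)                          # supervised DVF term

    logits_img = disc_image(warped_ct)                          # realism of the warped CT phase
    adv_img = F.binary_cross_entropy_with_logits(logits_img, torch.ones_like(logits_img))

    dvf_mag = dvf_pred.pow(2).sum(dim=1, keepdim=True).sqrt()   # per-voxel DVF magnitude
    logits_mag = disc_dvf_mag(dvf_mag)                          # realism of the motion amplitude itself
    adv_mag = F.binary_cross_entropy_with_logits(logits_mag, torch.ones_like(logits_mag))

    return w_sup * sup + w_adv * (adv_img + adv_mag)
```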
Affiliation(s)
- Y-H Cao
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- V Bourbonne
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- F Lucia
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- U Schick
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- J Bert
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- V Jaouen
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- IMT Atlantique, Brest, France
- D Visvikis
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
6
Brezovec BE, Berger AB, Hao YA, Chen F, Druckmann S, Clandinin TR. Mapping the neural dynamics of locomotion across the Drosophila brain. Curr Biol 2024; 34:710-726.e4. PMID: 38242122. DOI: 10.1016/j.cub.2023.12.063.
Abstract
Locomotion engages widely distributed networks of neurons. However, our understanding of the spatial architecture and temporal dynamics of the networks that underpin walking remains incomplete. We use volumetric two-photon imaging to map neural activity associated with walking across the entire brain of Drosophila. We define spatially clustered neural signals selectively associated with changes in either forward or angular velocity, demonstrating that neurons with similar behavioral selectivity are clustered. These signals reveal distinct topographic maps in diverse brain regions involved in navigation, memory, sensory processing, and motor control, as well as regions not previously linked to locomotion. We identify temporal trajectories of neural activity that sweep across these maps, including signals that anticipate future movement, representing the sequential engagement of clusters with different behavioral specificities. Finally, we register these maps to a connectome and identify neural networks that we propose underlie the observed signals, setting a foundation for subsequent circuit dissection. Overall, our work suggests a spatiotemporal framework for the emergence and execution of complex walking maneuvers and links this brain-wide neural activity to single neurons and local circuits.
Affiliation(s)
- Bella E Brezovec
- Department of Neurobiology, Stanford University, Fairchild D200, 299 W. Campus Drive, Stanford, CA 94305, USA
- Andrew B Berger
- Department of Neurobiology, Stanford University, Fairchild D200, 299 W. Campus Drive, Stanford, CA 94305, USA
- Yukun A Hao
- Department of Neurobiology, Stanford University, Fairchild D200, 299 W. Campus Drive, Stanford, CA 94305, USA
- Feng Chen
- Department of Neurobiology, Stanford University, Fairchild D200, 299 W. Campus Drive, Stanford, CA 94305, USA
- Shaul Druckmann
- Department of Neurobiology, Stanford University, Fairchild D200, 299 W. Campus Drive, Stanford, CA 94305, USA
- Thomas R Clandinin
- Department of Neurobiology, Stanford University, Fairchild D200, 299 W. Campus Drive, Stanford, CA 94305, USA.
7
Zhou G, Tward D, Lange K. A Majorization-Minimization Algorithm for Neuroimage Registration. SIAM J Imaging Sci 2024; 17:273-300. PMID: 38550750. PMCID: PMC10977051. DOI: 10.1137/22m1516907.
Abstract
Intensity-based image registration is critical for neuroimaging tasks, such as 3D reconstruction, time-series alignment, and common coordinate mapping. The gradient-based optimization methods commonly used to solve this problem require a careful selection of step length. This limitation imposes substantial time and computational costs. Here we propose a gradient-independent rigid-motion registration algorithm based on the majorization-minimization (MM) principle. Each iteration of our intensity-based MM algorithm reduces to a simple point-set rigid registration problem with a closed-form solution that avoids the step-length issue altogether. The details of the algorithm are presented, and an error bound for its more practical truncated form is derived. The MM algorithm is shown to be more effective than gradient descent on simulated images and Nissl-stained coronal slices of mouse brain. We also compare and contrast the similarities and differences between the MM algorithm and another gradient-free registration algorithm called the block-matching method. Finally, extensions of this algorithm to more complex problems are discussed.
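The inner subproblem referred to above, point-set rigid registration with a closed-form solution, is the classical orthogonal Procrustes (Kabsch) problem, which requires no step-length selection. The function below is a self-contained sketch of that closed-form solve, not of the MM majorization itself.

```python
import numpy as np

def rigid_fit(source, target):
    """Closed-form rigid transform (R, t) minimizing sum ||R @ source_i + t - target_i||^2.

    source, target: (N, 3) corresponding point sets. Returns rotation R (3, 3) and translation t (3,).
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    src, tgt = source - mu_s, target - mu_t
    u, _, vt = np.linalg.svd(src.T @ tgt)            # SVD of the cross-covariance matrix (Kabsch)
    d = np.sign(np.linalg.det(vt.T @ u.T))           # reflection guard: enforce det(R) = +1
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_t - r @ mu_s
    return r, t
```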
Affiliation(s)
- Gaiting Zhou
- Computational Medicine, UCLA, Los Angeles, CA 90024 USA
- Daniel Tward
- Computational Medicine, UCLA, Los Angeles, CA 90024 USA
- Kenneth Lange
- Computational Medicine, UCLA, Los Angeles, CA 90024 USA
8
Wang AQ, Yu EM, Dalca AV, Sabuncu MR. A robust and interpretable deep learning framework for multi-modal registration via keypoints. Med Image Anal 2023; 90:102962. PMID: 37769550. PMCID: PMC10591968. DOI: 10.1016/j.media.2023.102962.
Abstract
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration often are not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test-time. Our core insight which addresses these shortcomings is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive the end-to-end learning of keypoints tailored for the registration task, and without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image are driving the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields can be computed efficiently and in closed-form at test time corresponding to different transformation variants. We demonstrate the proposed framework in solving 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially in the context of large displacements. Our code is available at https://github.com/alanqrwang/keymorph.
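The closed-form step at the heart of this framework, recovering an affine transform from corresponding keypoints, is an ordinary least-squares solve and is differentiable with respect to the keypoint locations. The snippet below is a minimal illustration using a homogeneous-coordinate formulation, not the KeyMorph implementation.

```python
import numpy as np

def affine_from_keypoints(moving_pts, fixed_pts):
    """Least-squares 3D affine A (3, 4) such that A @ [p; 1] ~= q for keypoint pairs (p, q).

    moving_pts, fixed_pts: (N, 3) corresponding keypoints, with N >= 4 non-coplanar points.
    """
    n = moving_pts.shape[0]
    ph = np.hstack([moving_pts, np.ones((n, 1))])            # homogeneous coordinates, (N, 4)
    affine, *_ = np.linalg.lstsq(ph, fixed_pts, rcond=None)  # (4, 3) least-squares solution
    return affine.T                                          # (3, 4): [linear part | translation]
```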
Affiliation(s)
- Alan Q Wang
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA.
- Evan M Yu
- Iterative Scopes, Cambridge, MA 02139, USA
- Adrian V Dalca
- Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology, Cambridge, MA 02139, USA; A.A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, Charlestown, MA 02129, USA
- Mert R Sabuncu
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA
9
Schira MM, Isherwood ZJ, Kassem MS, Barth M, Shaw TB, Roberts MM, Paxinos G. HumanBrainAtlas: an in vivo MRI dataset for detailed segmentations. Brain Struct Funct 2023; 228:1849-1863. PMID: 37277567. PMCID: PMC10516788. DOI: 10.1007/s00429-023-02653-8.
Abstract
We introduce HumanBrainAtlas, an initiative to construct a highly detailed, open-access atlas of the living human brain that combines high-resolution in vivo MR imaging and detailed segmentations previously possible only in histological preparations. Here, we present and evaluate the first step of this initiative: a comprehensive dataset of two healthy male volunteers reconstructed to a 0.25 mm isotropic resolution for T1w, T2w, and DWI contrasts. Multiple high-resolution acquisitions were collected for each contrast and each participant, followed by averaging using symmetric group-wise normalisation (Advanced Normalisation Tools). The resulting image quality permits structural parcellations rivalling histology-based atlases, while maintaining the advantages of in vivo MRI. For example, components of the thalamus, hypothalamus, and hippocampus that are often impossible to identify using standard MRI protocols can be identified within the present data. Our data are virtually distortion free, fully 3D, and compatible with existing in vivo neuroimaging analysis tools. The dataset is suitable for teaching and is publicly available via our website (hba.neura.edu.au), which also provides data processing scripts. Instead of focusing on coordinates in an averaged brain space, our approach focuses on providing an example segmentation at great detail in a high-quality individual brain. This serves as an illustration of which features, contrasts, and relations can be used to interpret MRI datasets in research, clinical, and education settings.
Affiliation(s)
- Mark M Schira
- School of Psychology, University of Wollongong, Wollongong, NSW, 2522, Australia.
- Neuroscience Research Australia, Randwick, NSW, 2031, Australia.
- Zoey J Isherwood
- School of Psychology, University of Wollongong, Wollongong, NSW, 2522, Australia
- Department of Psychology, University of Nevada, Reno, NV, 89557, USA
- Mustafa S Kassem
- Neuroscience Research Australia, Randwick, NSW, 2031, Australia
- School of Psychology, The University of New South Wales, Sydney, NSW, 2052, Australia
- Markus Barth
- Centre for Advanced Imaging, The University of Queensland, St Lucia, QLD, 4067, Australia
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD, 7067, Australia
- Thomas B Shaw
- Centre for Advanced Imaging, The University of Queensland, St Lucia, QLD, 4067, Australia
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD, 7067, Australia
- Michelle M Roberts
- School of Psychology, University of Wollongong, Wollongong, NSW, 2522, Australia
- School of Psychology, The University of New South Wales, Sydney, NSW, 2052, Australia
- George Paxinos
- Neuroscience Research Australia, Randwick, NSW, 2031, Australia
- School of Psychology, The University of New South Wales, Sydney, NSW, 2052, Australia
10
Sang Y, McNitt-Gray M, Yang Y, Cao M, Low D, Ruan D. Target-oriented deep learning-based image registration with individualized test-time adaptation. Med Phys 2023; 50:7016-7026. PMID: 37222565. DOI: 10.1002/mp.16477.
Abstract
BACKGROUND A classic approach in medical image registration is to formulate an optimization problem based on the image pair of interest and seek a deformation vector field (DVF) to minimize the corresponding objective, often iteratively. It has a clear focus on the targeted pair, but is typically slow. In contrast, more recent deep-learning-based registration offers a much faster alternative and can benefit from data-driven regularization. However, learning is a process of "fitting" the training cohort, whose image or motion characteristics, or both, may differ from the pair of images to be tested, which is the ultimate goal of registration. Therefore, the generalization gap poses a high risk with direct inference alone. PURPOSE In this study, we propose an individualized adaptation to improve test sample targeting, to achieve a synergy of efficiency and performance in registration. METHODS Using a previously developed network with an integrated motion representation prior module as the implementation backbone, we propose to further adapt the trained registration network for image pairs at test time to optimize individualized performance. The adaptation method was tested against various characteristic shifts caused by cross-protocol, cross-platform, and cross-modality settings, with test evaluation performed on lung CBCT, cardiac MRI, and lung MRI, respectively. RESULTS Landmark-based registration errors and motion-compensated image enhancement results demonstrated significantly improved test registration performance from our method, compared to tuned classic B-spline registration and network solutions without adaptation. CONCLUSIONS We have developed a method to synergistically combine the effectiveness of a pre-trained deep network and the target-centric perspective of optimization-based registration to improve performance on individual test data.
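The procedure described above, fine-tuning a pre-trained registration network on the single test pair so that inference regains the target-centric character of optimization-based registration, can be sketched as a short adaptation loop. The network and loss interfaces below are assumptions, not the authors' implementation.

```python
import torch

def adapt_and_register(net, moving, fixed, similarity, smoothness,
                       steps=50, lr=1e-4, reg_weight=1.0):
    """Individualized test-time adaptation: fine-tune a pre-trained registration net on one pair.

    net(moving, fixed) is assumed to return (warped_moving, dvf).
    """
    net.train()
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        warped, dvf = net(moving, fixed)
        loss = similarity(warped, fixed) + reg_weight * smoothness(dvf)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    net.eval()
    with torch.no_grad():
        return net(moving, fixed)        # registration tailored to this specific pair
```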
Affiliation(s)
- Yudi Sang
- Department of Bioengineering, University of California, Los Angeles, California, USA
- Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Michael McNitt-Gray
- Department of Radiology, University of California, Los Angeles, California, USA
- Yingli Yang
- Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Minsong Cao
- Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Daniel Low
- Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Dan Ruan
- Department of Bioengineering, University of California, Los Angeles, California, USA
- Department of Radiation Oncology, University of California, Los Angeles, California, USA
11
Cece E, Meyrat P, Torino E, Verdier O, Colarieti-Tosti M. Spatio-Temporal Positron Emission Tomography Reconstruction with Attenuation and Motion Correction. J Imaging 2023; 9:231. PMID: 37888338. PMCID: PMC10607376. DOI: 10.3390/jimaging9100231.
Abstract
The detection of cancer lesions of a size comparable to the typical system resolution of modern scanners is a long-standing problem in Positron Emission Tomography. In this paper, the effect of composing an image-registering convolutional neural network with the modeling of the static data acquisition (i.e., the forward model) is investigated. Two algorithms for Positron Emission Tomography reconstruction with motion and attenuation correction are proposed, and their performance is evaluated in the detectability of small pulmonary lesions. The evaluation is performed on synthetic data with respect to chosen figures of merit, visual inspection, and an ideal observer. The commonly used figures of merit (Peak Signal-to-Noise Ratio, Recovery Coefficient, and Signal Difference-to-Noise Ratio) give inconclusive responses, whereas visual inspection and the Channelised Hotelling Observer suggest that the proposed algorithms outperform current clinical practice.
Affiliation(s)
- Enza Cece
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, 10044 Stockholm, Sweden; (E.C.); (P.M.)
- Department of Chemical Engineering, Materials and Production, University of Naples Federico II, 80131 Naples, Italy;
- Pierre Meyrat
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, 10044 Stockholm, Sweden; (E.C.); (P.M.)
- Enza Torino
- Department of Chemical Engineering, Materials and Production, University of Naples Federico II, 80131 Naples, Italy;
- Olivier Verdier
- Department of Computing, Mathematics, and Physics, HVL Western Norway University of Applied Sciences, 5063 Bergen, Norway;
- Massimiliano Colarieti-Tosti
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, 10044 Stockholm, Sweden; (E.C.); (P.M.)
- Department of Clinical Science, Intervention & Technology, Karolinska Institutet, 171 77 Stockholm, Sweden
12
Byra M, Poon C, Rachmadi MF, Schlachter M, Skibbe H. Exploring the performance of implicit neural representations for brain image registration. Sci Rep 2023; 13:17334. PMID: 37833464. PMCID: PMC10575995. DOI: 10.1038/s41598-023-44517-5.
Abstract
Pairwise image registration is a necessary prerequisite for brain image comparison and data integration in neuroscience and radiology. In this work, we explore the efficacy of implicit neural representations (INRs) in improving the performance of brain image registration in magnetic resonance imaging. In this setting, INRs serve as a continuous, coordinate-based approximation of the deformation field, implemented as a multi-layer perceptron. Previous research has demonstrated that sinusoidal representation networks (SIRENs) surpass ReLU models in performance. In this study, we first broaden the range of activation functions to further investigate the registration performance of implicit networks equipped with activation functions that exhibit diverse oscillatory properties. Specifically, in addition to SIRENs and ReLU, we evaluate activation functions based on snake, sine+, chirp, and Morlet wavelet functions. Second, we conduct experiments to relate the hyper-parameters of the models to registration performance. Third, we propose and assess various techniques, including cycle consistency loss, ensembles and cascades of implicit networks, as well as a combined image fusion and registration objective, to enhance the performance of implicit registration networks beyond the standard approach. The investigated implicit methods are compared to the VoxelMorph convolutional neural network and to the symmetric image normalization (SyN) registration algorithm from the Advanced Normalization Tools (ANTs). Our findings not only highlight the remarkable capabilities of implicit networks in addressing pairwise image registration challenges, but also showcase their potential as a powerful and versatile off-the-shelf tool in the fields of neuroscience and radiology.
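A concrete sense of the oscillatory activations compared above, e.g., the scaled sine of SIRENs and the snake function, can be given in a few lines. The layer below is an illustrative sketch with assumed frequency parameters, not the networks evaluated in the paper.

```python
import torch
import torch.nn as nn

def snake(x, a=1.0):
    """Snake activation: x + sin^2(a*x)/a, monotone overall but with periodic ripples."""
    return x + torch.sin(a * x) ** 2 / a

class SirenLayer(nn.Module):
    """One SIREN layer: a linear map followed by a frequency-scaled sine nonlinearity."""
    def __init__(self, in_dim, out_dim, omega0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.omega0 = omega0

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

# An implicit registration network stacks such layers and maps a normalized voxel coordinate
# to a displacement vector, optimized per image pair against an image similarity loss.
```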
Affiliation(s)
- Michal Byra
- RIKEN Center for Brain Science, Wako, Japan.
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland.
- Muhammad Febrian Rachmadi
- RIKEN Center for Brain Science, Wako, Japan
- Faculty of Computer Science, University of Indonesia, Depok, Indonesia
13
Wang T, Liu X, Zhang C, He Y, Chan Y, Xie Y, Liang X. Ring artifacts correction for computed tomography image using unsupervised contrastive learning. Phys Med Biol 2023; 68:205008. PMID: 37714184. DOI: 10.1088/1361-6560/acfa60.
Abstract
Objective. Computed tomography (CT) is a widely employed imaging technology for disease detection. However, CT images often suffer from ring artifacts, which may result from hardware defects and other factors. These artifacts compromise image quality and impede diagnosis. To address this challenge, we propose a novel method based on a dual contrastive learning image style transformation network (DCLGAN) that effectively eliminates ring artifacts from CT images while preserving texture details. Approach. Our method involves simulating ring artifacts on real CT data to generate uncorrected CT (uCT) data and transforming them into the polar coordinate system, where the ring artifacts become strip artifacts. The DCLGAN synthesis network is then applied in the polar coordinate system to remove the strip artifacts and generate a synthetic CT (sCT). We compare the uCT and sCT images to obtain a residual image, which is then filtered to extract the strip artifacts. An inverse polar transformation recovers the ring artifacts, which are subtracted from the original CT image to produce a corrected image. Main results. To validate the effectiveness of our approach, we tested it using real CT data, simulated data, and cone beam computed tomography images of patients' brains. The corrected CT images showed a reduction in mean absolute error by 12.36 Hounsfield units (HU), a decrease in root mean square error by 18.94 HU, an increase in peak signal-to-noise ratio by 3.53 decibels (dB), and an improvement in structural similarity index by 9.24%. Significance. These results demonstrate the efficacy of our method in eliminating ring artifacts and preserving image details, making it a valuable tool for CT imaging.
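The coordinate trick underlying this pipeline, resampling the slice into polar coordinates so that ring artifacts centered on the rotation axis become straight stripes, can be sketched directly. The sampling grid below is an illustrative implementation under assumed image conventions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cartesian_to_polar(image, n_radii=None, n_angles=720):
    """Resample a 2D slice onto a (radius, angle) grid centered at the image center.

    Ring artifacts (constant radius) map to straight stripes (constant row),
    which are easier to model, filter, and remove.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    if n_radii is None:
        n_radii = int(min(h, w) / 2)
    radii = np.linspace(0, min(cy, cx), n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    rows = cy + rr * np.sin(aa)
    cols = cx + rr * np.cos(aa)
    return map_coordinates(image, [rows, cols], order=1, mode="nearest")
```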
Affiliation(s)
- Tangsheng Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 100190, People's Republic of China
- Xuan Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Yutong He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Yinping Chan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
14
Joshi A, Hong Y. R2Net: Efficient and flexible diffeomorphic image registration using Lipschitz continuous residual networks. Med Image Anal 2023; 89:102917. PMID: 37598607. DOI: 10.1016/j.media.2023.102917.
Abstract
Classical diffeomorphic image registration methods, while being accurate, face the challenge of high computational costs. Deep learning based approaches provide a fast alternative to address these issues; however, most existing deep solutions either lose the good property of diffeomorphism or have limited flexibility to capture large deformations, under the assumption that deformations are driven by stationary velocity fields (SVFs). Also, the adopted squaring and scaling technique for integrating SVFs is time- and memory-consuming, hindering deep methods from handling large image volumes. In this paper, we present an unsupervised diffeomorphic image registration framework that uses deep residual networks (ResNets) as numerical approximations of the underlying continuous diffeomorphic setting governed by ordinary differential equations, parameterized by either SVFs or time-varying (non-stationary) velocity fields. This flexible parameterization in our Residual Registration Network (R2Net) not only provides the model with the ability to capture large deformations but also reduces the time and memory cost of integrating velocity fields for deformation generation. We also introduce a Lipschitz continuity constraint into the ResNet block to help achieve diffeomorphic deformations. To enhance the ability of our model to handle images with large volume sizes, we employ a hierarchical extension with a multi-phase learning strategy to solve the image registration task in a coarse-to-fine fashion. We demonstrate our models on four 3D image registration tasks with a wide range of anatomies, including brain MRIs, cine cardiac MRIs, and lung CT scans. Compared to classical methods SyN and diffeomorphic VoxelMorph, our models achieve comparable or better registration accuracy with much smoother deformations. Our source code is available online at https://github.com/ankitajoshi15/R2Net.
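One standard way to impose the Lipschitz continuity constraint mentioned above is to bound the spectral norm of each convolution in the residual branch, so that every ResNet step stays a small perturbation of the identity. The block below is a hedged sketch of that idea using PyTorch's spectral_norm utility and an assumed step scale, not the R2Net implementation.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class LipschitzResidualBlock(nn.Module):
    """Residual step x + h*f(x) with a spectrally normalized (hence Lipschitz-bounded) branch."""
    def __init__(self, channels, step=0.1):
        super().__init__()
        self.step = step   # small step keeps the map close to the identity (invertibility-friendly)
        self.branch = nn.Sequential(
            spectral_norm(nn.Conv3d(channels, channels, 3, padding=1)),
            nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv3d(channels, channels, 3, padding=1)),
        )

    def forward(self, x):
        return x + self.step * self.branch(x)

# Stacking such blocks mimics an explicit Euler discretization of an ODE-driven flow,
# which is the sense in which a ResNet approximates a diffeomorphic deformation.
```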
Affiliation(s)
- Ankita Joshi
- School of Computing, University of Georgia, Athens, 30602, USA
- Yi Hong
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.
15
Liu C, Li T, Cao P, Hui ES, Wong YL, Wang Z, Xiao H, Zhi S, Zhou T, Li W, Lam SK, Cheung ALY, Lee VHF, Ying M, Cai J. Respiratory-Correlated 4-Dimensional Magnetic Resonance Fingerprinting for Liver Cancer Radiation Therapy Motion Management. Int J Radiat Oncol Biol Phys 2023; 117:493-504. PMID: 37116591. DOI: 10.1016/j.ijrobp.2023.04.015.
Abstract
PURPOSE The objective of this study was to develop a respiratory-correlated (RC) 4-dimensional (4D) imaging technique based on magnetic resonance fingerprinting (MRF) (RC-4DMRF) for liver tumor motion management in radiation therapy. METHODS AND MATERIALS Thirteen patients with liver cancer were prospectively enrolled in this study. k-space MRF signals of the liver were acquired during free breathing using the fast acquisition with steady-state precession sequence on a 3T scanner. The signals were binned into 8 respiratory phases based on respiratory surrogates, and interphase displacement vector fields were estimated using a phase-specific low-rank optimization method. Thereafter, the tissue property maps, including T1 and T2 relaxation times and proton density, were reconstructed using a pyramid motion-compensated method that alternately optimized interphase displacement vector fields and subspace images. To evaluate the efficacy of RC-4DMRF, amplitude motion differences and Pearson correlation coefficients were determined to assess measurement agreement in tumor motion between RC-4DMRF and cine magnetic resonance imaging (MRI); mean absolute percentage errors of the RC-4DMRF-derived tissue maps were calculated to reveal tissue quantification accuracy using a digital human phantom; and the tumor-to-liver contrast-to-noise ratio of RC-4DMRF images was compared with that of planning CT and contrast-enhanced MRI (CE-MRI) images. A paired Student t test was used for statistical significance analysis with a P value threshold of .05. RESULTS RC-4DMRF achieved excellent agreement in motion measurement with cine MRI, yielding mean (± standard deviation) Pearson correlation coefficients of 0.95 ± 0.05 and 0.93 ± 0.09 and amplitude motion differences of 1.48 ± 1.06 mm and 0.81 ± 0.64 mm in the superior-inferior and anterior-posterior directions, respectively. Moreover, RC-4DMRF achieved high accuracy in tissue property quantification, with mean absolute percentage errors of 8.8%, 9.6%, and 5.0% for T1, T2, and proton density, respectively. Notably, the tumor contrast-to-noise ratio in RC-4DMRF-derived T1 maps (6.41 ± 3.37) was the highest among all tissue property maps, approximately equal to that of CE-MRI (6.96 ± 1.01, P = .862), and substantially higher than that of planning CT (2.91 ± 1.97, P = .048). CONCLUSIONS RC-4DMRF demonstrated high accuracy in respiratory motion measurement and tissue property quantification, potentially facilitating tumor motion management in liver radiation therapy.
Affiliation(s)
- Chenyang Liu
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China
- Tian Li
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China
- Peng Cao
- Department of Diagnostic Radiology, University of Hong Kong, Hong Kong SAR, China
- Edward S Hui
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong SAR, China; Department of Psychiatry, Chinese University of Hong Kong, Hong Kong SAR, China
- Yat-Lam Wong
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China
- Zuojun Wang
- Department of Diagnostic Radiology, University of Hong Kong, Hong Kong SAR, China
- Haonan Xiao
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China
- Shaohua Zhi
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China
- Ta Zhou
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China
- Wen Li
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China
- Sai Kit Lam
- Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR, China; Research Institute for Smart Ageing, Hong Kong Polytechnic University, Hong Kong SAR, China
- Victor Ho-Fun Lee
- Department of Clinical Oncology, University of Hong Kong, Hong Kong SAR, China
- Michael Ying
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China.
- Jing Cai
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong SAR, China; Research Institute for Smart Ageing, Hong Kong Polytechnic University, Hong Kong SAR, China.
16
Yang Y, Hu S, Zhang L, Shen D. Deep learning based brain MRI registration driven by local-signed-distance fields of segmentation maps. Med Phys 2023; 50:4899-4915. PMID: 36880373. DOI: 10.1002/mp.16291.
Abstract
BACKGROUND Deep learning based unsupervised registration utilizes intensity information to align images. To avoid the influence of intensity variation and improve registration accuracy, unsupervised and weakly-supervised registration are combined, namely, dually-supervised registration. However, the estimated dense deformation fields (DDFs) will focus on the edges between adjacent tissues when the segmentation labels are directly used to drive the registration process, which decreases the plausibility of brain MRI registration. PURPOSE In order to increase the accuracy of registration and ensure the plausibility of registration at the same time, we combine local-signed-distance fields (LSDFs) and intensity images to dually supervise the registration process. The proposed method not only uses the intensity and segmentation information but also uses the voxelwise geometric distance information to the edges. Hence, accurate voxelwise correspondence relationships are guaranteed both inside and outside the edges. METHODS The proposed dually-supervised registration method mainly includes three enhancement strategies. Firstly, we leverage the segmentation labels to construct their LSDFs to provide more geometrical information for guiding the registration process. Secondly, to calculate LSDFs, we construct an LSDF-Net, which is composed of 3D dilation layers and erosion layers. Finally, we design the dually-supervised registration network (VMLSDF) by combining the unsupervised VoxelMorph (VM) registration network and the weakly-supervised LSDF-Net, to utilize intensity and LSDF information, respectively. RESULTS Experiments were carried out on four public brain image datasets: LPBA40, HBN, OASIS1, and OASIS3. The experimental results show that the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD) of VMLSDF are higher than those of the original unsupervised VM and the dually-supervised registration network (VMseg) using intensity images and segmentation labels. At the same time, the percentage of negative Jacobian determinant (NJD) of VMLSDF is lower than that of VMseg. Our code is freely available at https://github.com/1209684549/LSDF. CONCLUSIONS The experimental results show that LSDFs can improve registration accuracy compared with VM and VMseg, and enhance the plausibility of the DDFs compared with VMseg.
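The signed-distance representation driving this supervision can be illustrated with a standard Euclidean distance transform: positive outside a structure, negative inside, optionally truncated to make the field local. The snippet below is a generic (L)SDF computation, not the paper's LSDF-Net.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_field(mask, truncate=None):
    """Signed distance to the boundary of a binary segmentation mask.

    Positive outside the structure, negative inside. If `truncate` is given, distances
    are clipped to [-truncate, truncate], yielding a *local* signed distance field.
    """
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)   # distance to the structure, measured outside it
    inside = distance_transform_edt(mask)     # distance to the background, measured inside it
    sdf = outside - inside
    if truncate is not None:
        sdf = np.clip(sdf, -truncate, truncate)
    return sdf
```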
Affiliation(s)
- Yue Yang
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Shunbo Hu
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Lintao Zhang
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
17
Jang I, Hoffmann M, Singh N, Balbastre Y, Chen L, Rockenbach MABC, Dalca A, Aganj I, Kalpathy-Cramer J, Fischl B, Frost R. Clinical evaluation of k-space correlation informed motion artifact detection in segmented multi-slice MRI. Proc Int Soc Magn Reson Med 2023; 2023:3425. PMID: 37565069. PMCID: PMC10414784.
Abstract
Motion artifacts can negatively impact diagnosis, patient experience, and radiology workflow especially when a patient recall is required. Detecting motion artifacts while the patient is still in the scanner could potentially improve workflow and reduce costs by enabling immediate corrective action. We demonstrate in a clinical k-space dataset that using cross-correlation between adjacent phase-encoding lines can detect motion artifacts directly from raw k-space in multi-shot multi-slice scans. We train a split-attention residual network to examine the performance in predicting motion artifact severity. The network is trained on simulated data and tested on real clinical data.
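The detection signal described above, cross-correlation between adjacent phase-encoding lines of raw k-space that drops when motion occurs between shots, can be computed with a few lines of NumPy. The snippet below is a simplified single-coil, single-slice illustration under assumed array conventions, not the clinical pipeline.

```python
import numpy as np

def adjacent_line_correlation(kspace):
    """Magnitude correlation between neighbouring phase-encoding lines of one slice.

    kspace: complex array of shape (n_phase_encodes, n_readout).
    Returns one correlation value per adjacent line pair; motion between shots tends
    to show up as localized drops in this profile.
    """
    mags = np.abs(kspace)
    corrs = []
    for i in range(mags.shape[0] - 1):
        a, b = mags[i], mags[i + 1]
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        corrs.append(float(np.mean(a * b)))   # Pearson correlation of adjacent lines
    return np.array(corrs)
```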
Affiliation(s)
- Ikbeom Jang
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Malte Hoffmann
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Nalini Singh
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Yael Balbastre
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Lina Chen
- Data Science Office, Mass General Brigham, Boston, MA, United States
- Adrian Dalca
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Iman Aganj
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Jayashree Kalpathy-Cramer
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Bruce Fischl
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Robert Frost
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
18
Iglesias JE. A ready-to-use machine learning tool for symmetric multi-modality registration of brain MRI. Sci Rep 2023; 13:6657. PMID: 37095168. PMCID: PMC10126156. DOI: 10.1038/s41598-023-33781-0.
Abstract
Volumetric registration of brain MRI is routinely used in human neuroimaging, e.g., to align different MRI modalities, to measure change in longitudinal analysis, to map an individual to a template, or in registration-based segmentation. Classical registration techniques based on numerical optimization have been very successful in this domain, and are implemented in widespread software suites like ANTs, Elastix, NiftyReg, or DARTEL. Over the last 7-8 years, learning-based techniques have emerged, which have a number of advantages like high computational efficiency, potential for higher accuracy, easy integration of supervision, and the ability to be part of meta-architectures. However, their adoption in neuroimaging pipelines has so far been almost nonexistent. Reasons include: lack of robustness to changes in MRI modality and resolution; lack of robust affine registration modules; lack of (guaranteed) symmetry; and, at a more practical level, the requirement of deep learning expertise that may be lacking at neuroimaging research sites. Here, we present EasyReg, an open-source, learning-based registration tool that can be easily used from the command line without any deep learning expertise or specific hardware. EasyReg combines the features of classical registration tools, the capabilities of modern deep learning methods, and the robustness to changes in MRI modality and resolution provided by our recent work in domain randomization. As a result, EasyReg is: fast; symmetric; diffeomorphic (and thus invertible); agnostic to MRI modality and resolution; compatible with affine and nonlinear registration; and does not require any preprocessing or parameter tuning. We present results on challenging registration tasks, showing that EasyReg is as accurate as classical methods when registering 1 mm isotropic scans within MRI modality, but much more accurate across modalities and resolutions. EasyReg is publicly available as part of FreeSurfer; see https://surfer.nmr.mgh.harvard.edu/fswiki/EasyReg.
Affiliation(s)
- Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02129, USA.
- Department of Medical Physics and Biomedical Engineering, University College London, London, WC1V 6LJ, UK.
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, 02139, USA.
19
|
Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:697-712. [PMID: 36264729 DOI: 10.1109/tmi.2022.3213983] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods and the adoption of research advances into practice, and prevents fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art in medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
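Two of the metric families mentioned above can be written down compactly: label overlap (Dice) as an accuracy measure and the spread of the log Jacobian determinant as a plausibility/smoothness proxy. The sketch below shows generic 2D versions of both; the challenge's exact definitions are in the paper, so treat these as illustrations only.

# Generic registration-evaluation sketches: Dice overlap for accuracy and the
# standard deviation of the log Jacobian determinant for plausibility.
import numpy as np

def dice(seg_fixed, seg_warped, label):
    """Dice overlap of one label between a fixed and a warped segmentation."""
    a, b = seg_fixed == label, seg_warped == label
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

def sd_log_jacobian(disp):
    """disp: displacement field (2, H, W), disp[i] = displacement along axis i."""
    du0_d0 = np.gradient(disp[0], axis=0)
    du0_d1 = np.gradient(disp[0], axis=1)
    du1_d0 = np.gradient(disp[1], axis=0)
    du1_d1 = np.gradient(disp[1], axis=1)
    # Jacobian determinant of the map x -> x + u(x); values near 1 mean small,
    # volume-preserving deformation, negative values mean folding.
    det = (1 + du0_d0) * (1 + du1_d1) - du0_d1 * du1_d0
    return np.std(np.log(np.clip(det, 1e-6, None)))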
20
|
Fu J, Tzortzakakis A, Barroso J, Westman E, Ferreira D, Moreno R. Fast three-dimensional image generation for healthy brain aging using diffeomorphic registration. Hum Brain Mapp 2023; 44:1289-1308. [PMID: 36468536 PMCID: PMC9921328 DOI: 10.1002/hbm.26165] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 11/15/2022] [Accepted: 11/16/2022] [Indexed: 12/12/2022] Open
Abstract
Predicting brain aging can help in the early detection and prognosis of neurodegenerative diseases. Longitudinal cohorts of healthy subjects scanned through magnetic resonance imaging (MRI) have been essential to understand the structural brain changes due to aging. However, these cohorts suffer from missing data due to logistic issues in the recruitment of subjects. This paper proposes a methodology for filling up missing data in longitudinal cohorts with anatomically plausible images that capture the subject-specific aging process. The proposed methodology is developed within the framework of diffeomorphic registration. First, two novel modules are introduced within Synthmorph, a fast, state-of-the-art deep learning-based diffeomorphic registration method, to simulate the aging process between the first and last available MRI scan for each subject in three-dimensional (3D). The use of image registration also makes the generated images plausible by construction. Second, we used six image similarity measurements to rearrange the generated images to the specific age range. Finally, we estimated the age of every generated image by using the assumption of linear brain decay in healthy subjects. The methodology was evaluated on 2662 T1-weighted MRI scans from 796 healthy participants from 3 different longitudinal cohorts: Alzheimer's Disease Neuroimaging Initiative, Open Access Series of Imaging Studies-3, and Group of Neuropsychological Studies of the Canary Islands (GENIC). In total, we generated 7548 images to simulate the access of a scan per subject every 6 months in these cohorts. We evaluated the quality of the synthetic images using six quantitative measurements and a qualitative assessment by an experienced neuroradiologist with state-of-the-art results. The assumption of linear brain decay was accurate in these cohorts (R2 ∈ [.924, .940]). The experimental results show that the proposed methodology can produce anatomically plausible aging predictions that can be used to enhance longitudinal datasets. Compared to deep learning-based generative methods, diffeomorphic registration is more likely to preserve the anatomy of the different structures of the brain, which makes it more appropriate for its use in clinical applications. The proposed methodology is able to efficiently simulate anatomically plausible 3D MRI scans of brain aging of healthy subjects from two images scanned at two different time points.
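The interpolation and age-assignment logic under the "linear brain decay" assumption can be sketched very simply: scale the deformation between the two available scans by a fraction t and assign the corresponding linearly interpolated age. The snippet below is a deliberate simplification for illustration (it scales a raw displacement field, whereas the paper scales a diffeomorphic, velocity-field model), and the toy data are placeholders.

# Simplified sketch: generate intermediate time points between two scans by
# scaling the estimated displacement field and assigning ages linearly.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, disp):
    """Warp a 2D image with a displacement field of shape (2, H, W)."""
    grid = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    return map_coordinates(image, grid + disp, order=1, mode="nearest")

def simulate_timepoints(baseline_img, disp_full, age0, age1, n_steps):
    """Yield (age, image) pairs spaced evenly between the two scan ages."""
    for k in range(1, n_steps + 1):
        t = k / (n_steps + 1)                       # fraction of the interval
        yield age0 + t * (age1 - age0), warp(baseline_img, t * disp_full)

# Toy example: one synthetic image roughly every 6 months between ages 70 and 74.
rng = np.random.default_rng(0)
img = rng.random((64, 64))                           # placeholder baseline scan
disp = np.zeros((2, 64, 64))                         # placeholder deformation
for age, synth in simulate_timepoints(img, disp, 70.0, 74.0, n_steps=7):
    pass  # e.g., save `synth` together with its estimated `age`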
Affiliation(s)
- Jingru Fu
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Antonios Tzortzakakis
- Division of Radiology, Department for Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
- Medical Radiation Physics and Nuclear Medicine, Functional Unit of Nuclear Medicine, Karolinska University Hospital, Huddinge, Stockholm, Sweden
| | - José Barroso
- Department of Psychology, Faculty of Health Sciences, University Fernando Pessoa Canarias, Las Palmas, Spain
| | - Eric Westman
- Division of Clinical Geriatrics, Centre for Alzheimer Research, Department of Neurobiology, Care Sciences, and Society (NVS), Karolinska Institutet, Stockholm, Sweden
- Department of Neuroimaging, Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom
| | - Daniel Ferreira
- Division of Clinical Geriatrics, Centre for Alzheimer Research, Department of Neurobiology, Care Sciences, and Society (NVS), Karolinska Institutet, Stockholm, Sweden
| | - Rodrigo Moreno
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
21
|
Miller M, Tward D, Trouvé A. Molecular Computational Anatomy: Unifying the Particle to Tissue Continuum via Measure Representations of the Brain. BME FRONTIERS 2022; 2022:9868673. [PMID: 37206893 PMCID: PMC10193958 DOI: 10.34133/2022/9868673] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 04/11/2022] [Indexed: 12/24/2023] Open
Abstract
OBJECTIVE The objective of this research is to unify the molecular representations of spatial transcriptomics and cellular scale histology with the tissue scales of computational anatomy for brain mapping. IMPACT STATEMENT We present a unified representation theory for brain mapping based on geometric varifold measures of the microscale deterministic structure and function with the statistical ensembles of the spatially aggregated tissue scales. INTRODUCTION Mapping across coordinate systems in computational anatomy allows us to understand structural and functional properties of the brain at the millimeter scale. New measurement technologies in digital pathology and spatial transcriptomics allow us to measure the brain molecule by molecule and cell by cell based on protein and transcriptomic functional identity. We currently have no mathematical representations for consistently integrating the tissue limits with the molecular particle descriptions. The formalism derived here demonstrates the methodology for consistently transitioning from the molecular scale of quantized particles (using mathematical structures first introduced by Dirac as the class of generalized functions) to the tissue scales, with methods originally introduced by Euler for fluids. METHODS We introduce two mathematical methods based on notions of generalized functions and statistical mechanics. We use geometric varifolds, a product measure on space and function, to represent functional states at the micro-scales (electrophysiology, molecular histology), integrated with a Boltzmann-like program to pass from deterministic particle descriptions to empirical probabilities on the functional states at the tissue scales. RESULTS Our space-function varifold representation provides a recipe for traversing from molecular to tissue scales in terms of a cascade of linear space scaling composed with nonlinear functional feature mapping. Following the cascade implies that every scale is a geometric measure, so that a universal family of measure norms can be introduced which quantifies the geodesic connection between brains in the orbit, independent of the probing technology, whether it be RNA identities, Tau or amyloid histology, spike trains, or dense MR imagery. CONCLUSIONS We demonstrate a unified brain mapping theory for molecular and tissue scales based on geometric measure representations. We call this consistent aggregation of tissue scales from particle and cellular scales molecular computational anatomy.
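For readers unfamiliar with the vocabulary, a generic space-function varifold can be written as a weighted sum of Dirac masses over positions and functional features, with coarse-graining toward tissue scale expressed through a spatial kernel. The notation below is a sketch in the spirit of the general varifold literature, not the paper's exact construction.

% Sketch (generic varifold notation, assumed here for illustration):
% each particle i carries a position x_i, a weight w_i, and a functional
% feature f_i (e.g., a transcript or cell-type identity), giving a product
% measure on space times feature space:
\[
  \mu \;=\; \sum_{i} w_i \,\delta_{x_i} \otimes \delta_{f_i},
  \qquad
  \mu_\sigma(A \times F) \;=\; \sum_{i} w_i \,\bigl(k_\sigma * \delta_{x_i}\bigr)(A)\,\pi_i(F),
\]
% where k_sigma is a spatial smoothing kernel and pi_i is the empirical
% distribution of features near particle i; increasing sigma moves from the
% particle description toward the aggregated tissue scale.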
Affiliation(s)
- Michael Miller
- Department of Biomedical Engineering & Kavli Neuroscience Discovery Institute & Center for Imaging Science, Johns Hopkins University, Baltimore, USA
| | - Daniel Tward
- Departments of Computational Medicine & Neurology, University of California Los Angeles, Los Angeles, USA
| | - Alain Trouvé
- Centre Giovanni Borelli (UMR 9010), Ecole Normale Supérieure Paris-Saclay, Université Paris-Saclay, Gif-sur-Yvette, France
22
|
Ringel MJ, Richey WL, Heiselman JS, Luo M, Meszoely IM, Miga MI. Supine magnetic resonance image registration for breast surgery: insights on material mechanics. J Med Imaging (Bellingham) 2022; 9:065001. [PMID: 36388143 PMCID: PMC9659944 DOI: 10.1117/1.jmi.9.6.065001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Accepted: 10/26/2022] [Indexed: 11/15/2022] Open
Abstract
Purpose Breast conserving surgery (BCS) is a common procedure for early-stage breast cancer patients. Supine preoperative magnetic resonance (MR) breast imaging for visualizing tumor location and extent, while not standard for procedural guidance, is being explored since it more closely represents the surgical presentation compared to conventional diagnostic imaging positions. Despite this preoperative imaging position, deformation is still present between the supine imaging and surgical state. As a result, a fast and accurate image-to-physical registration approach is needed to realize image-guided breast surgery. Approach In this study, three registration methods were investigated on healthy volunteers' breasts (n = 11) with the supine arm-down position simulating preoperative imaging and supine arm-up position simulating intraoperative presentation. The registration methods included (1) point-based rigid registration using synthetic fiducials, (2) nonrigid biomechanical model-based registration using sparse data, and (3) a data-dense three-dimensional diffeomorphic image-based registration from the Advanced Normalization Tools (ANTs) repository. Additionally, deformation metrics (volume change and anisotropy) were calculated from the ANTs deformation field to better understand breast material mechanics. Results The average target registration errors (TRE) were 10.4 ± 2.3, 6.4 ± 1.5, and 2.8 ± 1.3 mm (mean ± standard deviation) and the average fiducial registration errors (FRE) were 7.8 ± 1.7, 2.5 ± 1.1, and 3.1 ± 1.1 mm for the point-based rigid, nonrigid biomechanical, and ANTs registrations, respectively. The mechanics-based deformation metrics revealed an overall anisotropic tissue behavior and a statistically significant difference in volume change between glandular and adipose tissue, suggesting that nonrigid modeling methods may be improved by incorporating material heterogeneity and anisotropy. Conclusions Overall, registration accuracy significantly improved with increasingly flexible and data-dense registration methods. Analysis of these outcomes may inform the future development of image guidance systems for lumpectomy procedures.
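Both error measures reported above are mean Euclidean distances between corresponding points after mapping the moving-space points into the fixed space: FRE over the points that drove the registration, TRE over held-out targets. A minimal sketch (the names transform, moving_fiducials, etc. are placeholders):

# Mean point-to-point error used for both FRE and TRE.
import numpy as np

def mean_point_error(moved_pts, fixed_pts):
    """moved_pts, fixed_pts: (N, 3) arrays of corresponding coordinates in mm."""
    return np.linalg.norm(moved_pts - fixed_pts, axis=1).mean()

# Usage (placeholders): apply the estimated transform to the moving-space
# fiducials/targets, then compare against their fixed-space positions.
# fre = mean_point_error(transform(moving_fiducials), fixed_fiducials)
# tre = mean_point_error(transform(moving_targets), fixed_targets)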
Affiliation(s)
- Morgan J. Ringel
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
| | - Winona L. Richey
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
| | - Jon S. Heiselman
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Memorial Sloan-Kettering Cancer Center, Department of Surgery, New York, New York, United States
| | - Ma Luo
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
| | - Ingrid M. Meszoely
- Vanderbilt University Medical Center, Division of Surgical Oncology, Nashville, Tennessee, United States
| | - Michael I. Miga
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Department of Radiology and Radiological Sciences, Nashville, Tennessee, United States
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Vanderbilt University Medical Center, Department of Otolaryngology-Head and Neck Surgery, Nashville, Tennessee, United States
23
|
Dey N, Schlemper J, Mohseni Salehi SS, Zhou B, Gerig G, Sofka M. ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2022; 13436:66-77. [PMID: 37576451 PMCID: PMC10415941 DOI: 10.1007/978-3-031-16446-0_7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/15/2023]
Abstract
Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize hand-crafted inter-domain similarity functions, are limited in modeling nonlinear intensity-relationships and deformations, and may require significant re-engineering or underperform on new tasks, datasets, and domain pairs. This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration. By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment. Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task with all methods validated over a wide range of deformation regularization strengths.
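The contrastive objective described above is typically an InfoNCE-style loss over patch embeddings: embeddings of the two modalities at the same location are pulled together while all other locations act as negatives. The sketch below shows that generic loss in numpy; ContraReg's actual projection heads and loss formulation are described in the paper.

# Generic InfoNCE-style patchwise contrastive loss (illustration only).
import numpy as np

def info_nce(emb_a, emb_b, temperature=0.07):
    """emb_a, emb_b: (N, D) L2-normalised embeddings of N corresponding patches."""
    logits = emb_a @ emb_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives on the diagonal

rng = np.random.default_rng(0)
a = rng.normal(size=(128, 32)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = a + 0.1 * rng.normal(size=a.shape); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(info_nce(a, b))   # small loss when corresponding patches are already close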
Affiliation(s)
- Neel Dey
- Department of Computer Science & Engineering, New York University, Brooklyn, NY, USA
| | | | | | - Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
| | - Guido Gerig
- Department of Computer Science & Engineering, New York University, Brooklyn, NY, USA
24
|
Young SI, Balbastre Y, Dalca AV, Wells WM, Iglesias JE, Fischl B. SuperWarp: Supervised Learning and Warping on U-Net for Invariant Subvoxel-Precise Registration. Biomedical Image Registration (WBIR) 2022; 13386:103-115. [PMID: 36383500 PMCID: PMC9645132 DOI: 10.1007/978-3-031-11203-4_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to instead use self-supervision, with excellent results in several registration benchmarks. These approaches utilize a loss function that penalizes the intensity differences between the fixed and moving images, along with a suitable regularizer on the deformation. However, since images typically have large untextured regions, merely maximizing similarity between the two images is not sufficient to recover the true deformation. This problem is exacerbated by texture in other regions, which introduces severe non-convexity into the landscape of the training objective and ultimately leads to overfitting. In this paper, we argue that the relative failure of supervised registration approaches can in part be blamed on the use of regular U-Nets, which are jointly tasked with feature extraction, feature matching and deformation estimation. Here, we introduce a simple but crucial modification to the U-Net that disentangles feature extraction and matching from deformation prediction, allowing the U-Net to warp the features, across levels, as the deformation field is evolved. With this modification, direct supervision using target warps begins to outperform self-supervision approaches that require segmentations, presenting new directions for registration when images do not have segmentations. We hope that our findings in this preliminary workshop paper will re-ignite research interest in supervised image registration techniques. Our code is publicly available from http://github.com/balbasty/superwarp.
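The central modification, resampling intermediate feature maps with the evolving deformation rather than only warping the input images, boils down to one operation repeated across U-Net levels. The snippet below sketches that operation in a framework-agnostic way (warping a multi-channel feature map and rescaling a field between resolutions); the authors' actual implementation is at the GitHub link above.

# Sketch of warping multi-channel features with a displacement field, and of
# resampling a field between pyramid levels. Illustration only.
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_features(feat, disp):
    """feat: (C, H, W) feature map; disp: (2, H, W) displacement in voxels."""
    grid = np.mgrid[0:feat.shape[1], 0:feat.shape[2]].astype(float)
    coords = grid + disp
    return np.stack([map_coordinates(f, coords, order=1, mode="nearest") for f in feat])

def resize_disp(disp, shape):
    """Resample a displacement field to another level, rescaling its magnitudes."""
    factors = (shape[0] / disp.shape[1], shape[1] / disp.shape[2])
    return np.stack([zoom(disp[i], factors, order=1) * factors[i] for i in range(2)])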
Affiliation(s)
- Sean I Young
- Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology
| | - Yaël Balbastre
- Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology
| | - Adrian V Dalca
- Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology
| | - William M Wells
- Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology
| | - Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology
| | - Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology
25
|
Hernandez M, Ramon-Julvez U, Sierra-Tome D. Partial Differential Equation-Constrained Diffeomorphic Registration from Sum of Squared Differences to Normalized Cross-Correlation, Normalized Gradient Fields, and Mutual Information: A Unifying Framework. SENSORS (BASEL, SWITZERLAND) 2022; 22:3735. [PMID: 35632143 PMCID: PMC9146848 DOI: 10.3390/s22103735] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 05/10/2022] [Accepted: 05/11/2022] [Indexed: 06/15/2023]
Abstract
This work proposes a unifying framework for extending PDE-constrained Large Deformation Diffeomorphic Metric Mapping (PDE-LDDMM) with the sum of squared differences (SSD) to PDE-LDDMM with different image similarity metrics. We focused on the two best-performing variants of PDE-LDDMM with the spatial and band-limited parameterizations of diffeomorphisms. We derived the equations for gradient-descent and Gauss-Newton-Krylov (GNK) optimization with Normalized Cross-Correlation (NCC), its local version (lNCC), Normalized Gradient Fields (NGFs), and Mutual Information (MI). PDE-LDDMM with GNK was successfully implemented for NCC and lNCC, substantially improving on the registration results of SSD. For these metrics, GNK optimization outperformed gradient descent. However, for NGFs, GNK optimization was not able to surpass the performance of gradient descent. For MI, GNK optimization involved the product of huge dense matrices, requiring an unaffordable memory load. The extensive evaluation showed the band-limited version of PDE-LDDMM based on the deformation state equation, with the NCC and lNCC image similarities, to be among the best-performing PDE-LDDMM methods. In comparison with benchmark deep learning-based methods, our proposal reached or surpassed the accuracy of the best-performing models. In NIREP16, several configurations of PDE-LDDMM outperformed ANTS-lNCC, the best benchmark method. Although NGFs and MI usually underperformed the other metrics in our evaluation, these metrics showed potentially competitive results in a multimodal deformable experiment. We believe that our proposed image similarity extension over PDE-LDDMM will promote the use of physically meaningful diffeomorphisms in a wide variety of clinical applications that depend on deformable image registration.
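For reference, the two simplest similarity metrics discussed above are written out below in their global forms (the paper's lNCC restricts the same NCC computation to local windows); this is a generic sketch, not the authors' implementation.

# Sum of squared differences and global normalized cross-correlation.
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two images of the same shape."""
    return np.sum((a - b) ** 2)

def ncc(a, b, eps=1e-8):
    """Global normalized cross-correlation in [-1, 1]."""
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.sqrt(np.sum(a0**2) * np.sum(b0**2)) + eps)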
Affiliation(s)
- Monica Hernandez
- Aragon Institute of Engineering Research (I3A), 50018 Zaragoza, Spain
- Department of Computer Sciences, University of Zaragoza (UZ), 50018 Zaragoza, Spain; (U.R.-J.); (D.S.-T.)
| | - Ubaldo Ramon-Julvez
- Department of Computer Sciences, University of Zaragoza (UZ), 50018 Zaragoza, Spain; (U.R.-J.); (D.S.-T.)
| | - Daniel Sierra-Tome
- Department of Computer Sciences, University of Zaragoza (UZ), 50018 Zaragoza, Spain; (U.R.-J.); (D.S.-T.)
26
|
Hoffmann M, Billot B, Greve DN, Iglesias JE, Fischl B, Dalca AV. SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:543-558. [PMID: 34587005 PMCID: PMC8891043 DOI: 10.1109/tmi.2021.3116879] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to contrast introduced by magnetic resonance imaging (MRI). While classical registration methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning-based techniques are fast at test time but limited to registering images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency on training data by leveraging a generative strategy for diverse synthetic label maps and images that exposes networks to a wide range of variability, forcing them to learn more invariant features. This approach results in powerful networks that accurately generalize to a broad array of MRI contrasts. We present extensive experiments with a focus on 3D neuroimaging, showing that this strategy enables robust and accurate registration of arbitrary MRI contrasts even if the target contrast is not seen by the networks during training. We demonstrate registration accuracy surpassing the state of the art both within and across contrasts, using a single model. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images. Our code is available at https://w3id.org/synthmorph.
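The core of the label-map-driven synthesis idea can be sketched in a few lines: draw a random mean intensity per label, add noise, and blur, so that every training pair presents the network with a different, arbitrary contrast. This is a stripped-down illustration; the full generative model in the paper also includes deformations, bias fields, and resolution augmentation.

# Sketch: synthesize an intensity image of arbitrary "contrast" from a label map.
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_image_from_labels(labels, rng, noise_sigma=0.05, blur_sigma=1.0):
    """labels: integer label map; returns a synthetic intensity image."""
    means = rng.uniform(0.0, 1.0, size=labels.max() + 1)   # random intensity per label
    img = means[labels] + rng.normal(scale=noise_sigma, size=labels.shape)
    return gaussian_filter(img, blur_sigma)

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=(64, 64))                  # toy label map
img_a = synth_image_from_labels(labels, rng)                 # two different "contrasts"
img_b = synth_image_from_labels(labels, rng)                 # of the same anatomy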