1
Neves Silva S, McElroy S, Aviles Verdera J, Colford K, St Clair K, Tomi-Tricot R, Uus A, Ozenne V, Hall M, Story L, Pushparajah K, Rutherford MA, Hajnal JV, Hutter J. Fully automated planning for anatomical fetal brain MRI on 0.55T. Magn Reson Med 2024; 92:1263-1276. PMID: 38650351. DOI: 10.1002/mrm.30122.
Abstract
PURPOSE: To widen the availability of fetal MRI through fully automatic real-time planning of radiological brain planes on 0.55T MRI. METHODS: Deep learning-based detection of key brain landmarks on a whole-uterus echo planar imaging scan enables subsequent fully automatic planning of the radiological single-shot turbo spin echo acquisitions. The landmark detection pipeline was trained on over 120 datasets of varying field strengths, echo times, and resolutions, and was quantitatively evaluated. The entire automatic planning solution was tested prospectively in nine fetal subjects between 20 and 37 weeks of gestation. A comprehensive evaluation of all steps was conducted, covering the distance between manual and automatic landmarks, the planning quality, and the resulting image quality. RESULTS: Prospective automatic planning was performed in real time without latency in all subjects. Landmark detection accuracy was 4.2 ± 2.6 mm for the fetal eyes and 6.5 ± 3.2 mm for the cerebellum; planning quality was 2.4/3 (compared with 2.6/3 for manual planning), and diagnostic image quality was 2.2, compared with 2.1 for manual planning. CONCLUSIONS: Real-time automatic planning of all three key fetal brain planes was successfully achieved, paving the way toward simplifying the acquisition of fetal MRI and thereby widening the availability of this modality in nonspecialist centers.
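The core geometric step described above, deriving a scan plane from detected landmarks, can be sketched as follows. This is not the authors' implementation, and the landmark coordinates are hypothetical; it only illustrates that three landmark positions (e.g., both eyes and the cerebellum) are enough to define an imaging plane:

```python
import numpy as np

def plane_from_landmarks(p1, p2, p3):
    """Return a unit normal and an origin point for the plane through three landmarks."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)  # vector perpendicular to both in-plane edges
    return n / np.linalg.norm(n), p1

# Hypothetical landmark coordinates in scanner space (mm)
eye_l = [30.0, 60.0, 40.0]
eye_r = [70.0, 60.0, 40.0]
cereb = [50.0, 90.0, 20.0]

normal, origin = plane_from_landmarks(eye_l, eye_r, cereb)
```

The normal and origin are exactly the quantities a scanner needs to prescribe a slice orientation through the detected anatomy.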
Affiliation(s)
- Sara Neves Silva
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Sarah McElroy
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Jordina Aviles Verdera
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Kathleen Colford
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Kamilah St Clair
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Raphael Tomi-Tricot
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Alena Uus
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Valéry Ozenne
- CNRS, CRMSB, UMR 5536, IHU Liryc, Université de Bordeaux, Bordeaux, France
- Megan Hall
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Department of Women & Children's Health, King's College London, London, UK
- Lisa Story
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Department of Women & Children's Health, King's College London, London, UK
- Kuberan Pushparajah
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Mary A Rutherford
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Joseph V Hajnal
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Jana Hutter
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Smart Imaging Lab, Radiological Institute, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Germany
2
Hoffmann M, Hoopes A, Greve DN, Fischl B, Dalca AV. Anatomy-aware and acquisition-agnostic joint registration with SynthMorph. Imaging Neuroscience 2024; 2:1-33. PMID: 39015335. PMCID: PMC11247402. DOI: 10.1162/imag_a_00197.
Abstract
Affine image registration is a cornerstone of medical-image analysis. While classical algorithms can achieve excellent accuracy, they solve a time-consuming optimization for every image pair. Deep-learning (DL) methods learn a function that maps an image pair to an output transform. Evaluating the function is fast, but capturing large transforms can be challenging, and networks tend to struggle if a test-image characteristic, such as the resolution, shifts from the training domain. Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if algorithms consider all structures in the image. We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image without preprocessing. First, we leverage a strategy that trains networks with widely varying images synthesized from label maps, yielding robust performance across acquisition specifics unseen at training. Second, we optimize the spatial overlap of select anatomical labels. This enables networks to distinguish anatomy of interest from irrelevant structures, removing the need for preprocessing that excludes content which would impinge on anatomy-specific registration. Third, we combine the affine model with a deformable hypernetwork that lets users choose the optimal deformation-field regularity for their specific data, at registration time, in a fraction of the time required by classical methods. This framework is applicable to learning anatomy-aware, acquisition-agnostic registration of any anatomy with any architecture, as long as label maps are available for training. We analyze how competing architectures learn affine transforms and compare state-of-the-art registration tools across an extremely diverse set of neuroimaging data, aiming to truly capture the behavior of methods in the real world. SynthMorph demonstrates high accuracy and is available at https://w3id.org/synthmorph, as a single complete end-to-end solution for registration of brain magnetic resonance imaging (MRI) data.
Affiliation(s)
- Malte Hoffmann
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Andrew Hoopes
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Douglas N. Greve
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Adrian V. Dalca
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
3
Aggarwal K, Manso Jimeno M, Ravi KS, Gonzalez G, Geethanath S. Developing and deploying deep learning models in brain magnetic resonance imaging: A review. NMR Biomed 2023; 36:e5014. PMID: 37539775. DOI: 10.1002/nbm.5014.
Abstract
Magnetic resonance imaging (MRI) of the brain has benefited from deep learning (DL) to alleviate the burden on radiologists and MR technologists, and improve throughput. The easy accessibility of DL tools has resulted in a rapid increase of DL models and subsequent peer-reviewed publications. However, the rate of deployment in clinical settings is low. Therefore, this review attempts to bring together the ideas from data collection to deployment in the clinic, building on the guidelines and principles that accreditation agencies have espoused. We introduce the need for and the role of DL to deliver accessible MRI. This is followed by a brief review of DL examples in the context of neuropathologies. Based on these studies and others, we collate the prerequisites to develop and deploy DL models for brain MRI. We then delve into the guiding principles to develop good machine learning practices in the context of neuroimaging, with a focus on explainability. A checklist based on the United States Food and Drug Administration's good machine learning practices is provided as a summary of these guidelines. Finally, we review the current challenges and future opportunities in DL for brain MRI.
Affiliation(s)
- Kunal Aggarwal
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
- Department of Electrical and Computer Engineering, Technical University Munich, Munich, Germany
- Marina Manso Jimeno
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Keerthi Sravan Ravi
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Gilberto Gonzalez
- Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Sairam Geethanath
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
4
Xu J, Moyer D, Gagoski B, Iglesias JE, Grant PE, Golland P, Adalsteinsson E. NeSVoR: Implicit Neural Representation for Slice-to-Volume Reconstruction in MRI. IEEE Trans Med Imaging 2023; 42:1707-1719. PMID: 37018704. PMCID: PMC10287191. DOI: 10.1109/tmi.2023.3236216.
Abstract
Reconstructing 3D MR volumes from multiple motion-corrupted stacks of 2D slices has shown promise in imaging of moving subjects, e.g., fetal MRI. However, existing slice-to-volume reconstruction methods are time-consuming, especially when a high-resolution volume is desired, and they remain vulnerable to severe subject motion and to image artifacts in the acquired slices. In this work, we present NeSVoR, a resolution-agnostic slice-to-volume reconstruction method that models the underlying volume as a continuous function of spatial coordinates with an implicit neural representation. To improve robustness to subject motion and other image artifacts, we adopt a continuous and comprehensive slice acquisition model that takes into account rigid inter-slice motion, the point spread function, and bias fields. NeSVoR also estimates pixel-wise and slice-wise variances of image noise, enabling removal of outliers during reconstruction and visualization of uncertainty. Extensive experiments are performed on both simulated and in vivo data to evaluate the proposed method. Results show that NeSVoR achieves state-of-the-art reconstruction quality while providing two- to ten-fold acceleration in reconstruction time over state-of-the-art algorithms.
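The central idea of an implicit neural representation, modeling the volume as a continuous function of spatial coordinates, can be illustrated with a toy sketch. This is not the NeSVoR network (which is trained to fit the acquired slices under its motion and acquisition model); it is a minimal, randomly initialized coordinate MLP showing why such a representation is resolution-agnostic — it can be queried at any coordinates, at any density:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(dims):
    """Random weights for a small fully connected network."""
    return [(rng.standard_normal((i, o)) / np.sqrt(i), np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def query(params, coords):
    """Evaluate the implicit volume at continuous (x, y, z) coordinates."""
    h = coords
    for k, (w, b) in enumerate(params):
        h = h @ w + b
        if k < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    return h[:, 0]  # one scalar intensity per coordinate

params = init_mlp([3, 64, 64, 1])  # (x, y, z) in, intensity out

# The same continuous function can be sampled coarsely or finely:
coarse = query(params, rng.uniform(-1, 1, (10, 3)))
fine = query(params, rng.uniform(-1, 1, (10000, 3)))
```

Training such a network against the observed slice pixels (rather than leaving it random, as here) is what turns it into a reconstruction.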
5
Hoopes A, Mora JS, Dalca AV, Fischl B, Hoffmann M. SynthStrip: skull-stripping for any brain image. Neuroimage 2022; 260:119474. PMID: 35842095. PMCID: PMC9465771. DOI: 10.1016/j.neuroimage.2022.119474.
Abstract
The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
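The synthesis strategy behind SynthStrip (and SynthMorph above) can be caricatured in a few lines: from a single anatomical label map, draw a fresh random intensity per label for each training image, so the network sees the same anatomy under wildly varying, deliberately unrealistic contrasts. This is a minimal sketch with invented noise parameters, not the paper's full generative model (which also varies deformation, resolution, bias fields, and artifacts):

```python
import numpy as np

rng = np.random.default_rng(1)

def synthesize(label_map, n_labels):
    """Assign a random mean intensity to each label and add voxel noise,
    producing an image with arbitrary, contrast-agnostic appearance."""
    means = rng.uniform(0.0, 255.0, n_labels)       # new contrast every call
    return means[label_map] + rng.normal(0.0, 10.0, label_map.shape)

# Stand-in segmentation: 4 labels on a 32^3 grid (real label maps are anatomical)
labels = rng.integers(0, 4, (32, 32, 32))

img_a = synthesize(labels, 4)
img_b = synthesize(labels, 4)  # same anatomy, completely different contrast
```

Because the ground-truth brain mask comes from the label map itself, every synthetic image is automatically paired with a perfect training target.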
Affiliation(s)
- Andrew Hoopes
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA
- Jocelyn S Mora
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA
- Adrian V Dalca
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, 25 Shattuck St, Boston, MA, USA; Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, 25 Shattuck St, Boston, MA, USA; Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, USA; Harvard-MIT Division of Health Sciences and Technology, 77 Massachusetts Ave, Cambridge, MA, USA
- Malte Hoffmann
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, 25 Shattuck St, Boston, MA, USA
6
Stout JN, Bedoya MA, Grant PE, Estroff JA. Fetal Neuroimaging Updates. Magn Reson Imaging Clin N Am 2021; 29:557-581. PMID: 34717845. PMCID: PMC8562558. DOI: 10.1016/j.mric.2021.06.007.
Abstract
MR imaging is used in conjunction with ultrasound screening for fetal brain abnormalities because it offers better contrast, higher resolution, and multiplanar capabilities that increase the accuracy and confidence of diagnosis. Fetal motion still severely limits the MR imaging sequences that can be acquired. We outline current acquisition strategies for fetal brain MR imaging and discuss near-term advances that will improve its reliability. Prospective and retrospective motion correction aim to make the full complement of MR neuroimaging modalities available for fetal diagnosis, improve the performance of existing modalities, and open new horizons to understanding in utero brain development.
Affiliation(s)
- Jeffrey N Stout
- Fetal and Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA.
- M Alejandra Bedoya
- Department of Radiology, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA
- P Ellen Grant
- Fetal and Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA; Department of Radiology, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA; Department of Pediatrics, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA
- Judy A Estroff
- Department of Radiology, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA; Maternal Fetal Care Center, Boston Children's Hospital, 300 Longwood Avenue, Boston, MA 02115, USA