1
Brackenier Y, Cordero-Grande L, McElroy S, Tomi-Tricot R, Barbaroux H, Bridgen P, Malik SJ, Hajnal JV. Sequence-agnostic motion-correction leveraging efficiently calibrated Pilot Tone signals. Magn Reson Med 2024;92:1881-1897. PMID: 38860530. DOI: 10.1002/mrm.30161.
Abstract
PURPOSE This study leverages externally generated Pilot Tone (PT) signals to perform motion-corrected brain MRI for sequences with arbitrary k-space sampling and image contrast. THEORY AND METHODS PT signals are promising external motion sensors due to their cost-effectiveness, easy workflow, and consistent performance across contrasts and sampling patterns. However, they lack robust calibration pipelines. This work calibrates PT signals to rigid motion parameters acquired during short blocks (~4 s) of motion calibration (MC) acquisitions, which are short enough to fit unobtrusively between acquisitions. MC acquisitions leverage self-navigated trajectories that enable state-of-the-art motion estimation methods for efficient calibration. To capture the range of patient motion occurring throughout the examination, distributed motion calibration (DMC) uses data acquired from MC scans distributed across the entire examination. After calibration, PT is used to retrospectively motion-correct sequences with arbitrary k-space sampling and image contrast. Additionally, a data-driven calibration refinement is proposed to tailor calibration models to individual acquisitions. In vivo experiments involving 12 healthy volunteers tested the DMC protocol's ability to robustly correct subject motion. RESULTS The proposed calibration pipeline produces pose parameters consistent with reference values, even when distributing only six of these approximately 4-s MC blocks, resulting in a total acquisition time of 22 s. In vivo motion experiments reveal significantly (p < 0.05) improved motion correction, with increased signal-to-residual ratio for both MPRAGE and SPACE sequences with standard k-space acquisition, especially when motion is large. Additionally, results highlight the benefits of using a distributed calibration approach.
CONCLUSIONS This study presents a framework for performing motion-corrected brain MRI in sequences with arbitrary k-space encoding and contrast, using externally generated PT signals. The DMC protocol is introduced, enabling observation of patient motion throughout the examination and providing a calibration pipeline suitable for clinical deployment. The method's application is demonstrated in standard volumetric MPRAGE and SPACE sequences.
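The calibration step maps multichannel PT amplitudes to the six rigid pose parameters estimated from the MC blocks. The abstract does not specify the model; a minimal baseline, assuming a linear model with intercept (all function names and array shapes here are illustrative, not from the paper):

```python
import numpy as np

def calibrate_pt(pt_sig, motion_params):
    """Fit a linear map from Pilot Tone signals to rigid motion parameters.

    pt_sig:        (T, C) PT amplitudes from C receive channels at T time points
    motion_params: (T, 6) reference rigid parameters (3 translations, 3 rotations)
    Returns W with shape (C + 1, 6), the last row being the intercept.
    """
    T = pt_sig.shape[0]
    A = np.hstack([pt_sig, np.ones((T, 1))])      # append intercept column
    W, *_ = np.linalg.lstsq(A, motion_params, rcond=None)
    return W

def predict_motion(pt_sig, W):
    """Apply the calibrated model to new PT samples."""
    A = np.hstack([pt_sig, np.ones((pt_sig.shape[0], 1))])
    return A @ W
```

Distributing the MC blocks across the examination, as the DMC protocol does, amounts to sampling the training pairs for this regression from the full range of poses the patient actually visits.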
Affiliation(s)
- Yannick Brackenier
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Lucilio Cordero-Grande
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid and CIBER-BNN, ISCIII, Madrid, Spain
- Sarah McElroy
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- MR Research Collaborations, Siemens Healthcare Limited, Frimley, UK
- Raphael Tomi-Tricot
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- MR Research Collaborations, Siemens Healthcare Limited, Frimley, UK
- Hugo Barbaroux
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Philippa Bridgen
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Shaihan J Malik
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Joseph V Hajnal
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
2
Levac B, Kumar S, Jalal A, Tamir JI. Accelerated motion correction with deep generative diffusion models. Magn Reson Med 2024;92:853-868. PMID: 38688874. DOI: 10.1002/mrm.30082.
Abstract
PURPOSE The aim of this work is to develop a method to solve the ill-posed inverse problem of accelerated image reconstruction while correcting forward-model imperfections caused by subject motion during MRI examinations. METHODS The proposed solution uses a Bayesian framework based on deep generative diffusion models to jointly estimate a motion-free image and rigid motion estimates from subsampled, motion-corrupt two-dimensional (2D) k-space data. RESULTS We demonstrate the ability to reconstruct motion-free images from accelerated 2D Cartesian and non-Cartesian scans without any external reference signal. We show that our method improves over existing correction techniques on both simulated and prospectively accelerated data. CONCLUSION We propose a flexible framework for retrospective motion correction of accelerated MRI based on deep generative diffusion models, with potential application to other forward-model corruptions.
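The forward-model imperfection being corrected here is rigid motion. For the translational component, the corruption model follows directly from the Fourier shift theorem: each shot's k-space lines acquire a linear phase ramp determined by the object position during that shot. A sketch of such a per-shot corruption operator for 2D Cartesian sampling (the interleaved shot ordering and function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def motion_corrupt_kspace(img, shifts):
    """Simulate per-shot translational motion in 2D Cartesian k-space.

    img:    (N, N) image
    shifts: list of (dy, dx) pixel shifts, one per shot; ky lines are
            interleaved across shots. A translation in image space is a
            linear phase ramp in k-space (Fourier shift theorem).
    """
    N = img.shape[0]
    k = np.fft.fftshift(np.fft.fft2(img))
    ky = np.fft.fftshift(np.fft.fftfreq(N))[:, None]   # cycles/pixel along ky
    kx = np.fft.fftshift(np.fft.fftfreq(N))[None, :]
    out = np.zeros_like(k)
    S = len(shifts)
    for s, (dy, dx) in enumerate(shifts):
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        out[s::S, :] = (k * phase)[s::S, :]            # lines acquired in shot s
    return out
```

Inverting this operator jointly with the image, under a diffusion prior, is the essence of the joint estimation problem the paper solves; rotations additionally resample k-space coordinates and are omitted from this sketch.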
Affiliation(s)
- Brett Levac
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Sidharth Kumar
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Ajil Jalal
- Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- Jonathan I Tamir
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
3
Chen X, Wu W, Chiew M. Motion compensated structured low-rank reconstruction for 3D multi-shot EPI. Magn Reson Med 2024;91:2443-2458. PMID: 38361309. DOI: 10.1002/mrm.30019.
Abstract
PURPOSE 3D multi-shot EPI imaging offers several benefits, including higher SNR and high isotropic resolution, compared to 2D single-shot EPI. However, it suffers from shot-to-shot inconsistencies arising from physiologically induced phase variations and bulk motion. This work proposes a motion compensated structured low-rank (mcSLR) reconstruction method to address both issues for 3D multi-shot EPI. METHODS Structured low-rank reconstruction has been used successfully in previous work to deal with inter-shot phase variations in 3D multi-shot EPI imaging. It circumvents the estimation of phase variations by reconstructing an individual image for each phase state; these are then combined by sum-of-squares, exploiting their linear interdependency encoded in structured low-rank constraints. However, structured low-rank constraints become less effective in the presence of inter-shot motion, which corrupts image magnitude consistency and invalidates the linear relationship between shots. Thus, this work jointly models inter-shot phase variations and motion corruptions by incorporating rigid motion compensation into the structured low-rank reconstruction, where motion estimates are obtained in a fully data-driven way without relying on external hardware or imaging navigators. RESULTS Simulation and in vivo experiments at 7T have demonstrated that the mcSLR method can effectively reduce image artifacts and improve the robustness of 3D multi-shot EPI, outperforming existing methods which only address inter-shot phase variations or motion, but not both. CONCLUSION The proposed mcSLR reconstruction compensates for rigid motion, and thus improves the validity of structured low-rank constraints, resulting in improved robustness of 3D multi-shot EPI to both inter-shot motion and phase variations.
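The "linear interdependency between shots" that low-rank constraints exploit can be illustrated in a deliberately simplified form: stack the shots as columns of a Casorati matrix and truncate its SVD. The paper's actual constraint is a structured (Hankel-type) low-rank model combined with motion operators; the sketch below, with illustrative names, only shows the subspace-projection idea that motion corruption invalidates:

```python
import numpy as np

def shot_lowrank_project(kshots, rank):
    """Project multi-shot k-space data onto a rank-`rank` shot subspace.

    kshots: (S, M) array, one flattened k-space per shot. When shots differ
    only by smooth phase modulations, the Casorati matrix (shots as columns)
    is approximately low-rank; truncating its SVD suppresses components that
    are inconsistent across shots.
    """
    C = kshots.T                                  # (M, S) Casorati matrix
    U, s, Vh = np.linalg.svd(C, full_matrices=False)
    s[rank:] = 0                                  # keep leading subspace only
    return ((U * s) @ Vh).T                       # back to (S, M)
```

When one shot moves, its column is no longer a (phase-modulated) copy of the others, the matrix rank rises, and this projection discards genuine signal instead of artifact; compensating the motion first, as mcSLR does, restores the low-rank property.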
Affiliation(s)
- Xi Chen
- Department of Radiological Sciences, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Wenchuan Wu
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, Oxfordshire, UK
- Mark Chiew
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, Oxfordshire, UK
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
4
Rizzuti G, Schakel T, Huttinga NRF, Dankbaar JW, van Leeuwen T, Sbrizzi A. Towards retrospective motion correction and reconstruction for clinical 3D brain MRI protocols with a reference contrast. MAGMA 2024. PMID: 38758490. DOI: 10.1007/s10334-024-01161-y.
Abstract
OBJECT In a typical MR session, several contrasts are acquired. Due to the sequential nature of the data acquisition process, the patient may experience discomfort at some point during the session and start moving. Hence, it is quite common to have MR sessions where some contrasts are well-resolved while others exhibit motion artifacts. Instead of repeating the scans that are corrupted by motion, we introduce a reference-guided retrospective motion correction scheme that takes advantage of the motion-free scans, based on a generalized rigid registration routine. MATERIALS AND METHODS We focus on various existing clinical 3D brain protocols at 1.5 T based on Cartesian sampling. Controlled experiments with three healthy volunteers and three levels of motion are performed. RESULTS Radiological inspection confirms that the proposed method consistently improves the corrupted scans. Furthermore, for the set of specific motion tests performed in this study, the quality indices based on PSNR and SSIM show only a modest decrease in correction quality as a function of motion complexity. DISCUSSION While the results on controlled experiments are positive, future applications to patient data will ultimately clarify whether the proposed correction scheme satisfies the radiological requirements.
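The abstract does not detail the generalized rigid registration routine. For the translational component, a classic data-driven estimator between a motion-free reference contrast and a corrupted scan is phase correlation; the sketch below is a simplified stand-in for illustration, not the authors' routine:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer 2D translation aligning `mov` to `ref`.

    The normalized cross-power spectrum of the two images has a pure-phase
    form whose inverse FFT peaks at the translation (phase correlation).
    """
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(mov)
    cps = F1 * np.conj(F2)
    cps /= np.abs(cps) + 1e-12                 # keep phase, discard magnitude
    corr = np.fft.ifft2(cps).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap circular peak coordinates into signed shifts
    return tuple(i if i <= n // 2 else i - n for i, n in zip(idx, corr.shape))
```

Because only the spectral phase is used, this estimator is comparatively insensitive to the contrast difference between the reference and the corrupted scan, which is the property a reference-guided scheme relies on.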
Affiliation(s)
- Gabrio Rizzuti
- Utrecht University, Heidelberglaan 8, 3584 CS, Utrecht, The Netherlands
- Universitair Medisch Centrum Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Tim Schakel
- Universitair Medisch Centrum Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Niek R F Huttinga
- Universitair Medisch Centrum Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Jan Willem Dankbaar
- Universitair Medisch Centrum Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Tristan van Leeuwen
- Utrecht University, Heidelberglaan 8, 3584 CS, Utrecht, The Netherlands
- Centrum Wiskunde & Informatica, Science Park Amsterdam 123, 1098 XG, Amsterdam, The Netherlands
- Alessandro Sbrizzi
- Universitair Medisch Centrum Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
5
Beljaards L, Pezzotti N, Rao C, Doneva M, van Osch MJP, Staring M. AI-based motion artifact severity estimation in undersampled MRI allowing for selection of appropriate reconstruction models. Med Phys 2024;51:3555-3565. PMID: 38167996. DOI: 10.1002/mp.16918.
Abstract
BACKGROUND Magnetic Resonance acquisition is a time-consuming process, making it susceptible to patient motion during scanning. Even motion on the order of a millimeter can introduce severe blurring and ghosting artifacts, potentially necessitating re-acquisition. Magnetic Resonance Imaging (MRI) can be accelerated by acquiring only a fraction of k-space, combined with advanced reconstruction techniques leveraging coil sensitivity profiles and prior knowledge. Artificial intelligence (AI)-based reconstruction techniques have recently been popularized, but generally assume an ideal setting without intra-scan motion. PURPOSE To retrospectively detect and quantify the severity of motion artifacts in undersampled MRI data. This may prove valuable as a safety mechanism for AI-based approaches, provide useful information to the reconstruction method, or prompt re-acquisition while the patient is still in the scanner. METHODS We developed a deep learning approach that detects and quantifies motion artifacts in undersampled brain MRI. We demonstrate that synthetically motion-corrupted data can be leveraged to train the convolutional neural network (CNN)-based motion artifact estimator, generalizing well to real-world data. Additionally, we leverage the motion artifact estimator as a selector: a motion-robust reconstruction model is used when a considerable amount of motion is detected, and a high data-consistency model otherwise. RESULTS Training and validation were performed on 4387 and 1304 synthetically motion-corrupted images and their uncorrupted counterparts, respectively. Testing was performed on undersampled in vivo motion-corrupted data from 28 volunteers, where our model distinguished head motion from motion-free scans with 91% and 96% accuracy when trained on synthetic and on real data, respectively.
It predicted a manually defined quality label ('Good', 'Medium', or 'Bad') correctly 76% and 85% of the time when trained on synthetic and real data, respectively. When used as a selector, it chose the appropriate reconstruction network 93% of the time, achieving near-optimal SSIM values. CONCLUSIONS The proposed method quantified motion artifact severity in undersampled MRI data with high accuracy, enabling real-time motion artifact detection that can help improve the safety and quality of AI-based reconstructions.
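The severity-then-select pattern is independent of the particular estimator. As an illustration only, the CNN can be replaced by a classic scalar artifact proxy such as gradient entropy (sharp images concentrate their gradient energy and so score lower); the threshold, names, and model labels below are assumptions, not the paper's:

```python
import numpy as np

def gradient_entropy(img):
    """Scalar motion-artifact severity proxy: entropy of the normalized
    gradient-magnitude distribution (higher = more blur/ghosting-like)."""
    gy, gx = np.gradient(img.astype(float))
    g = np.sqrt(gy**2 + gx**2).ravel()
    p = g / (g.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def select_reconstructor(img, threshold):
    """Route to a motion-robust model when severity is high, otherwise to a
    high data-consistency model (the paper's selection logic, with a
    stand-in severity estimate)."""
    return "motion_robust" if gradient_entropy(img) > threshold else "high_data_consistency"
```

The paper's contribution is precisely that a trained CNN makes this severity score accurate enough on undersampled data for the routing decision to be near-optimal in SSIM.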
Affiliation(s)
- Laurens Beljaards
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Nicola Pezzotti
- Cardiologs, Philips, Paris, France
- Faculty of Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
- Chinmay Rao
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Marius Staring
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
6
Brackenier Y, Wang N, Liao C, Cao X, Schauman S, Yurt M, Cordero-Grande L, Malik SJ, Kerr A, Hajnal JV, Setsompop K. Rapid and accurate navigators for motion and B0 tracking using QUEEN: Quantitatively enhanced parameter estimation from navigators. Magn Reson Med 2024;91:2028-2043. PMID: 38173304. DOI: 10.1002/mrm.29976.
Abstract
PURPOSE To develop a framework that jointly estimates rigid motion and polarizing magnetic field (B0) perturbations (δB0) for brain MRI using a single navigator a few milliseconds in duration, and to additionally allow navigator acquisition at arbitrary timings within any type of sequence to obtain high-temporal-resolution estimates. THEORY AND METHODS Methods exist that match navigator data to a low-resolution single-contrast image (scout) to estimate either motion or δB0. In this work, called QUEEN (QUantitatively Enhanced parameter Estimation from Navigators), we propose combined motion and δB0 estimation from a fast, tailored trajectory with arbitrary-contrast navigator data. To this end, the concept of a quantitative scout (Q-Scout) acquisition is proposed, from which contrast-matched scout data is predicted for each navigator. Finally, navigator trajectories, the contrast-matched scout, and δB0 are integrated into a motion-informed parallel-imaging framework. RESULTS Simulations and in vivo experiments show the need to model δB0 to obtain accurate motion parameter estimates in the presence of strong δB0 perturbations. Simulations confirm that tailored navigator trajectories are needed to robustly estimate both motion and δB0. Furthermore, experiments show that a contrast-matched scout is needed for parameter estimation from multicontrast navigator data. A retrospective in vivo reconstruction experiment shows improved image quality when using the proposed Q-Scout and QUEEN estimation. CONCLUSIONS We developed a framework to jointly estimate rigid motion parameters and δB0 from navigators. Combining a contrast-matched scout with the proposed trajectory allows navigator deployment in almost any sequence and/or timing, which enables higher-temporal-resolution motion and δB0 estimates.
Affiliation(s)
- Nan Wang
- Department of Radiology, Stanford University, Stanford, California, USA
- Congyu Liao
- Department of Radiology, Stanford University, Stanford, California, USA
- Xiaozhi Cao
- Department of Radiology, Stanford University, Stanford, California, USA
- Sophie Schauman
- Department of Radiology, Stanford University, Stanford, California, USA
- Mahmut Yurt
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Lucilio Cordero-Grande
- Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid and CIBER-BNN, Madrid, Spain
- Shaihan J Malik
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Adam Kerr
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Cognitive and Neurobiological Imaging, Stanford University, Stanford, California, USA
- Joseph V Hajnal
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Kawin Setsompop
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
7
Casella C, Vecchiato K, Cromb D, Guo Y, Winkler AM, Hughes E, Dillon L, Green E, Colford K, Egloff A, Siddiqui A, Price A, Grande LC, Wood TC, Malik S, Teixeira RPA, Carmichael DW, O'Muircheartaigh J. Widespread, depth-dependent cortical microstructure alterations in pediatric focal epilepsy. Epilepsia 2024;65:739-752. PMID: 38088235. PMCID: PMC7616339. DOI: 10.1111/epi.17861.
Abstract
OBJECTIVE Tissue abnormalities in focal epilepsy may extend beyond the presumed focus. The underlying pathophysiology of these broader changes is unclear, and it is not known whether they result from ongoing disease processes or treatment-related side effects, or whether they emerge earlier. Few studies have focused on childhood, the period of onset for most focal epilepsies. Fewer still have utilized quantitative magnetic resonance imaging (MRI), which may provide a more sensitive and interpretable measure of tissue microstructural change. Here, we aimed to determine common spatial modes of changes in cortical architecture in children with heterogeneous drug-resistant focal epilepsy and, secondarily, whether changes were related to disease severity. METHODS To assess cortical microstructure, quantitative T1 and T2 relaxometry (qT1 and qT2) were measured in 43 children with drug-resistant focal epilepsy (age range = 4-18 years) and 46 typically developing children (age range = 2-18 years). We assessed depth-dependent qT1 and qT2 values across the neocortex, as well as their gradient of change across cortical depths. We also determined whether global changes seen in group analyses were driven by focal pathologies in individual patients. Finally, as a proof of concept, we trained a classifier using qT1 and qT2 gradient maps from patients with radiologically defined abnormalities (MRI positive) and healthy controls, and tested whether it could classify patients without reported radiological abnormalities (MRI negative). RESULTS We uncovered depth-dependent qT1 and qT2 increases in widespread cortical areas in patients, likely representing microstructural alterations in myelin or gliosis. Changes did not correlate with disease severity measures, suggesting they may represent antecedent neurobiological alterations. Using a classifier trained with MRI-positive patients and controls, sensitivity was 71.4% at 89.4% specificity on held-out MRI-negative patients.
SIGNIFICANCE These findings suggest the presence of a potential imaging endophenotype of focal epilepsy, detectable irrespective of radiologically identified abnormalities.
Affiliation(s)
- Chiara Casella
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department for Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology, and Neuroscience, King’s College London, London, UK
- Katy Vecchiato
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department for Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology, and Neuroscience, King’s College London, London, UK
- Daniel Cromb
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Yourong Guo
- Department for Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology, and Neuroscience, King’s College London, London, UK
- Anderson M. Winkler
- Department of Human Genetics, University of Texas Rio Grande Valley, Brownsville, Texas, USA
- Emer Hughes
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Louise Dillon
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Elaine Green
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Kathleen Colford
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Alexia Egloff
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Ata Siddiqui
- Department of Radiology, Guy’s and Saint Thomas’ Hospitals NHS Trust, London, UK
- Anthony Price
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Lucilio Cordero Grande
- Department of Biomedical Engineering, King’s College London, London, UK
- Biomedical Image Technologies, Telecommunication Engineering School (ETSIT), Technical University of Madrid, Bioengineering, Biomaterials and Nanomedicine Networking Biomedical Research Centre, National Institute of Health Carlos III, Madrid, Spain
- Tobias C. Wood
- Department of Neuroimaging, King’s College London, London, UK
- Shaihan Malik
- Department of Biomedical Engineering, King’s College London, London, UK
- Jonathan O’Muircheartaigh
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department for Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology, and Neuroscience, King’s College London, London, UK
- Medical Research Council (MRC) Centre for Neurodevelopmental Disorders, London, UK
8
Polak D, Hossbach J, Splitthoff DN, Clifford B, Lo WC, Tabari A, Lang M, Huang SY, Conklin J, Wald LL, Cauley S. Motion guidance lines for robust data consistency-based retrospective motion correction in 2D and 3D MRI. Magn Reson Med 2023;89:1777-1790. PMID: 36744619. PMCID: PMC10518424. DOI: 10.1002/mrm.29534.
Abstract
PURPOSE To develop a robust retrospective motion-correction technique based on repeating k-space guidance lines for improving motion correction in Cartesian 2D and 3D brain MRI. METHODS The motion guidance lines are inserted into the standard sequence orderings for 2D turbo spin echo and 3D MPRAGE to inform a data consistency-based motion estimation and reconstruction, which can be guided by a low-resolution scout. The few required guidance lines are repeated during each echo train and discarded in the final image reconstruction. Thus, integration within a standard k-space acquisition ordering preserves the expected image quality/contrast and motion sensitivity of that sequence. RESULTS Through simulation and in vivo 2D multislice and 3D motion experiments, we demonstrate that 2 or 4 optimized motion guidance lines per shot, respectively, enable accurate motion estimation and correction. Clinically acceptable reconstruction times are achieved through fully separable on-the-fly motion optimizations (~1 s/shot) using standard scanner GPU hardware. CONCLUSION The addition of guidance lines to scout-accelerated motion estimation facilitates robust retrospective motion correction that can be introduced effectively without perturbing standard clinical protocols and workflows.
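Why a handful of repeated lines suffices can be illustrated in a reduced setting: for a pure translation along one axis, two repeats of the same k-space line differ only by a linear phase ramp, so the shift is recoverable by a least-squares fit to the phase difference. This is an illustrative reduction (function name and signature assumed), not the paper's full data-consistency optimization, which handles full rigid motion via the scout:

```python
import numpy as np

def shot_shift_from_repeated_line(line_ref, line_shot, axis_len):
    """Estimate a 1D translation (in pixels) between two acquisitions of the
    same k-space line. A shift d multiplies the line by exp(-2*pi*1j*k*d);
    fitting the slope of the unwrapped phase difference recovers d."""
    k = np.fft.fftshift(np.fft.fftfreq(axis_len))
    dphi = np.unwrap(np.angle(line_shot * np.conj(line_ref)))
    A = np.vstack([-2 * np.pi * k, np.ones_like(k)]).T   # slope + offset
    sol, *_ = np.linalg.lstsq(A, dphi, rcond=None)
    return sol[0]
```

Repeating such lines every echo train, as the guidance-line scheme does, gives one consistency measurement per shot at negligible acquisition cost, which is what makes the per-shot (~1 s) motion optimizations separable.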
Affiliation(s)
- Daniel Polak
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Siemens Healthcare GmbH, Erlangen, Germany
- Azadeh Tabari
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Min Lang
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Susie Y. Huang
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- John Conklin
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Lawrence L. Wald
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Stephen Cauley
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
9
Solomon O, Patriat R, Braun H, Palnitkar TE, Moeller S, Auerbach EJ, Ugurbil K, Sapiro G, Harel N. Motion robust magnetic resonance imaging via efficient Fourier aggregation. Med Image Anal 2023;83:102638. PMID: 36257133. DOI: 10.1016/j.media.2022.102638.
Abstract
We present a method for suppressing motion artifacts in anatomical magnetic resonance acquisitions. Our proposed technique, termed MOTOR-MRI, can salvage images that are otherwise heavily corrupted by motion-induced artifacts and blur that render them unusable. Contrary to other techniques, MOTOR-MRI operates on the reconstructed images and not on k-space data. It relies on breaking the standard acquisition protocol into several shorter ones (while maintaining the same total acquisition time) and subsequently aggregating, efficiently in Fourier space, the locally sharp and consistent information among them, producing a sharp, motion-mitigated image. We demonstrate the efficacy of the technique on T2-weighted turbo spin echo brain scans with severe motion corruption from both 3 T and 7 T scanners and show significant qualitative and quantitative improvement in image quality. MOTOR-MRI can operate independently or in conjunction with additional motion correction methods.
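The aggregation idea can be sketched in a deliberately reduced form: transform each short repetition to Fourier space and combine them with weights favoring the sharper repetitions. MOTOR-MRI weights *locally* (per region and frequency); the sketch below uses a single global sharpness weight per repetition for brevity, and its names are illustrative:

```python
import numpy as np

def aggregate_fourier(reps):
    """Fuse repeated short acquisitions in Fourier space, weighting each
    repetition by a global sharpness score (mean squared gradient).
    A crude, global-weight stand-in for MOTOR-MRI's local aggregation."""
    reps = [np.asarray(r, float) for r in reps]

    def sharpness(im):
        gy, gx = np.gradient(im)
        return (gy**2 + gx**2).mean()

    w = np.array([sharpness(r) for r in reps])
    w = w / w.sum()                                   # normalize weights
    F = sum(wi * np.fft.fft2(r) for wi, r in zip(w, reps))
    return np.fft.ifft2(F).real
```

With global weights the Fourier step is mathematically redundant (the combination is linear); it becomes essential in the actual method, where weights vary across frequency bands and spatial regions so that each repetition contributes only where it is locally sharp and consistent.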
Affiliation(s)
- Oren Solomon
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Rémi Patriat
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Henry Braun
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Tara E Palnitkar
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Steen Moeller
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Edward J Auerbach
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Kamil Ugurbil
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Guillermo Sapiro
- Department of Electrical and Computer Engineering, Duke University, NC, United States of America
- Department of Biomedical Engineering, Duke University, NC, United States of America
- Department of Computer Science, Duke University, NC, United States of America
- Department of Mathematics, Duke University, NC, United States of America
- Noam Harel
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Department of Neurosurgery, University of Minnesota, Minneapolis, MN, United States of America
| |
Collapse
|
10
Mesoscopic in vivo human T2* dataset acquired using quantitative MRI at 7 Tesla. Neuroimage 2022; 264:119733. PMID: 36375782; DOI: 10.1016/j.neuroimage.2022.119733.
Abstract
Mesoscopic (0.1-0.5 mm) interrogation of the living human brain is critical for advancing neuroscience and bridging the resolution gap with animal models. Despite the variety of MRI contrasts measured in recent years at the mesoscopic scale, in vivo quantitative imaging of T2* has not been performed. Here we provide a dataset containing empirical T2* measurements acquired at 0.35 × 0.35 × 0.35 mm3 voxel resolution using 7 Tesla MRI. To demonstrate unique features and high quality of this dataset, we generate flat map visualizations that reveal fine-scale cortical substructures such as layers and vessels, and we report quantitative depth-dependent T2* (as well as R2*) values in primary visual cortex and auditory cortex that are highly consistent across subjects. This dataset is freely available at https://doi.org/10.17605/OSF.IO/N5BJ7, and may prove useful for anatomical investigations of the human brain, as well as for improving our understanding of the basis of the T2*-weighted (f)MRI signal.
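Quantitative T2* maps of the kind released here are conventionally estimated from multi-echo gradient-echo data by fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2*). A minimal log-linear least-squares fit might look like this (a generic sketch, assuming noiseless multi-echo magnitudes; it is not the dataset's actual processing pipeline, and the function name is hypothetical):

```python
import numpy as np

def fit_t2star(signals, tes):
    """Voxel-wise mono-exponential T2* fit, S(TE) = S0 * exp(-TE / T2*),
    via log-linear least squares.  signals: (n_echoes, ...) magnitudes,
    tes: echo times.  Returns T2* in the same units as the echo times."""
    tes = np.asarray(tes, dtype=float)
    logs = np.log(np.clip(signals, 1e-12, None)).reshape(len(tes), -1)
    A = np.stack([np.ones_like(tes), -tes], axis=1)   # columns: [1, -TE]
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)   # coef[1] = R2*
    r2star = coef[1].reshape(signals.shape[1:])
    return 1.0 / np.clip(r2star, 1e-12, None)
```

Applied to noiseless synthetic decays the fit recovers the ground-truth T2* exactly; with real data, magnitude-bias and noise-floor corrections would be needed on top of this.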
11
Edwards AD, Rueckert D, Smith SM, Abo Seada S, Alansary A, Almalbis J, Allsop J, Andersson J, Arichi T, Arulkumaran S, Bastiani M, Batalle D, Baxter L, Bozek J, Braithwaite E, Brandon J, Carney O, Chew A, Christiaens D, Chung R, Colford K, Cordero-Grande L, Counsell SJ, Cullen H, Cupitt J, Curtis C, Davidson A, Deprez M, Dillon L, Dimitrakopoulou K, Dimitrova R, Duff E, Falconer S, Farahibozorg SR, Fitzgibbon SP, Gao J, Gaspar A, Harper N, Harrison SJ, Hughes EJ, Hutter J, Jenkinson M, Jbabdi S, Jones E, Karolis V, Kyriakopoulou V, Lenz G, Makropoulos A, Malik S, Mason L, Mortari F, Nosarti C, Nunes RG, O’Keeffe C, O’Muircheartaigh J, Patel H, Passerat-Palmbach J, Pietsch M, Price AN, Robinson EC, Rutherford MA, Schuh A, Sotiropoulos S, Steinweg J, Teixeira RPAG, Tenev T, Tournier JD, Tusor N, Uus A, Vecchiato K, Williams LZJ, Wright R, Wurie J, Hajnal JV. The Developing Human Connectome Project Neonatal Data Release. Front Neurosci 2022; 16:886772. PMID: 35677357; PMCID: PMC9169090; DOI: 10.3389/fnins.2022.886772.
Abstract
The Developing Human Connectome Project has created a large open-science resource which provides researchers with data for investigating typical and atypical brain development across the perinatal period. It has collected 1228 multimodal magnetic resonance imaging (MRI) brain datasets from 1173 fetal and/or neonatal participants, together with collateral demographic, clinical, family, neurocognitive and genomic data. All subjects were studied in utero and/or soon after birth on a single MRI scanner using specially developed scanning sequences which included novel motion-tolerant imaging methods. The project is now releasing a large set of neonatal data; fetal data will be described and released separately. This release includes scans from 783 infants: 583 healthy infants born at term, as well as preterm infants and infants at high risk of atypical neurocognitive development. Many infants were imaged more than once to provide longitudinal data, and the total number of datasets being released is 887. We describe the dHCP image acquisition and processing protocols, summarize the available imaging and collateral data, and provide information on how the data can be accessed.
12
Brackenier Y, Cordero‐Grande L, Tomi‐Tricot R, Wilkinson T, Bridgen P, Price A, Malik SJ, De Vita E, Hajnal JV. Data‐driven motion‐corrected brain MRI incorporating pose‐dependent B0 fields. Magn Reson Med 2022; 88:817-831. PMID: 35526212; PMCID: PMC9324873; DOI: 10.1002/mrm.29255.
Abstract
Purpose To develop a fully data‐driven retrospective intrascan motion‐correction framework for volumetric brain MRI at ultrahigh field (7 Tesla) that includes modeling of pose‐dependent changes in polarizing magnetic (B0) fields. Theory and Methods Tissue susceptibility induces spatially varying B0 distributions in the head, which change with pose. A physics‐inspired B0 model has been deployed to model the B0 variations in the head and was validated in vivo. This model is integrated into a forward parallel imaging model for imaging in the presence of motion. Our proposal minimizes the number of added parameters, enabling the developed framework to estimate dynamic B0 variations from appropriately acquired data without requiring navigators. The effect on data‐driven motion correction is validated in simulations and in vivo. Results The applicability of the physics‐inspired B0 model was confirmed in vivo. Simulations show the need to include the pose‐dependent B0 fields in the reconstruction to improve motion‐correction performance and the feasibility of estimating B0 evolution from the acquired data. The proposed motion and B0 correction showed improved image quality for strongly corrupted data at 7 Tesla in simulations and in vivo. Conclusion We have developed a motion‐correction framework that accounts for and estimates pose‐dependent B0 fields. The method improves current state‐of‐the‐art data‐driven motion‐correction techniques when B0 dependencies cannot be neglected. The use of a compact physics‐inspired B0 model together with leveraging the parallel imaging encoding redundancy and previously proposed optimized sampling patterns enables a purely data‐driven approach.
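In the rigid-body forward model that such frameworks build on, a rotation of the head is equivalent to rotating the k-space sampling coordinates, and a translation multiplies each k-space sample by a linear phase. A bare-bones 2D sketch of that relationship (illustrative only; the paper's framework additionally models the pose-dependent B0 fields, which this toy omits, and the function name is hypothetical):

```python
import numpy as np

def apply_rigid_motion_kspace(traj, data, angle, shift):
    """Map 2D rigid head motion into k-space: rotation by `angle` rotates
    the sampling trajectory; translation by `shift` adds a linear phase
    exp(-2*pi*i * k . t) to each sample.  traj: (N, 2), data: (N,)."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    traj_rot = traj @ R.T                                  # rotated k-space coords
    phase = np.exp(-2j * np.pi * traj @ np.asarray(shift, dtype=float))
    return traj_rot, data * phase
```

Retrospective correction inverts this model: the estimated rotation is applied to the trajectory and the conjugate translation phase to the raw data before gridding.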
13
Ljungberg E, Wood TC, Solana AB, Williams SCR, Barker GJ, Wiesinger F. Motion corrected silent ZTE neuroimaging. Magn Reson Med 2022; 88:195-210. PMID: 35381110; PMCID: PMC9321117; DOI: 10.1002/mrm.29201.
Abstract
Purpose To develop self‐navigated motion correction for 3D silent zero echo time (ZTE) based neuroimaging and characterize its performance for different types of head motion. Methods The proposed method termed MERLIN (Motion Estimation & Retrospective correction Leveraging Interleaved Navigators) achieves self‐navigation by using interleaved 3D phyllotaxis k‐space sampling. Low resolution navigator images are reconstructed continuously throughout the ZTE acquisition using a sliding window and co‐registered in image space relative to a fixed reference position. Rigid body motion corrections are then applied retrospectively to the k‐space trajectory and raw data and reconstructed into a final, high‐resolution ZTE image. Results MERLIN demonstrated successful and consistent motion correction for magnetization prepared ZTE images for a range of different instructed motion paradigms. The acoustic noise response of the self‐navigated phyllotaxis trajectory was found to be only slightly above ambient noise levels (<4 dBA). Conclusion Silent ZTE imaging combined with MERLIN addresses two major challenges intrinsic to MRI (i.e., subject motion and acoustic noise) in a synergistic and integrated manner without increase in scan time and thereby forms a versatile and powerful framework for clinical and research MR neuroimaging applications.
14
Verschuur AS, Boswinkel V, Tax CM, Osch JA, Nijholt IM, Slump CH, Vries LS, Wezel‐Meijler G, Leemans A, Boomsma MF. Improved neonatal brain MRI segmentation by interpolation of motion corrupted slices. J Neuroimaging 2022; 32:480-492. PMID: 35253956; PMCID: PMC9314603; DOI: 10.1111/jon.12985.
15
Slipsager JM, Glimberg SL, Højgaard L, Paulsen RR, Wighton P, Tisdall MD, Jaimes C, Gagoski BA, Grant PE, van der Kouwe A, Olesen OV, Frost R. Comparison of prospective and retrospective motion correction in 3D-encoded neuroanatomical MRI. Magn Reson Med 2022; 87:629-645. PMID: 34490929; PMCID: PMC8635810; DOI: 10.1002/mrm.28991.
Abstract
PURPOSE To compare prospective motion correction (PMC) and retrospective motion correction (RMC) in Cartesian 3D-encoded MPRAGE scans and to investigate the effects of correction frequency and parallel imaging on the performance of RMC. METHODS Head motion was estimated using a markerless tracking system and sent to a modified MPRAGE sequence, which can continuously update the imaging FOV to perform PMC. The prospective correction was applied either before each echo train (before-ET) or at every sixth readout within the ET (within-ET). RMC was applied during image reconstruction by adjusting k-space trajectories according to the measured motion. The motion correction frequency was retrospectively increased with RMC or decreased with reverse RMC. Phantom and in vivo experiments were used to compare PMC and RMC, as well as to compare within-ET and before-ET correction frequency during continuous motion. The correction quality was quantitatively evaluated using the structural similarity index measure with a reference image without motion correction and without intentional motion. RESULTS PMC resulted in superior image quality compared to RMC both visually and quantitatively. Increasing the correction frequency from before-ET to within-ET reduced the motion artifacts in RMC. A hybrid PMC and RMC correction, that is, retrospectively increasing the correction frequency of before-ET PMC to within-ET, also reduced motion artifacts. Inferior performance of RMC compared to PMC was shown with GRAPPA calibration data without intentional motion and without any GRAPPA acceleration. CONCLUSION Reductions in local Nyquist violations with PMC resulted in superior image quality compared to RMC. Increasing the motion correction frequency to within-ET reduced the motion artifacts in both RMC and PMC.
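The structural similarity index used here for quantitative scoring is typically computed with standard tools (e.g., scikit-image's `structural_similarity`). As a sketch of what the metric measures, a single-window simplification of the SSIM formula reads as follows (the windowed SSIM used in practice averages this statistic over local neighborhoods; the function name is hypothetical):

```python
import numpy as np

def global_ssim(x, y, data_range=None, k1=0.01, k2=0.03):
    """Single-window (global) SSIM between two images: compares mean
    luminance, contrast, and structure in one statistic in [-1, 1]."""
    if data_range is None:
        data_range = max(x.max() - x.min(), y.max() - y.min())
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An identical image pair scores 1.0; residual motion artifacts lower both the structure (covariance) and contrast terms, so better-corrected images score closer to 1 against the motion-free reference.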
Affiliation(s)
- Jakob M. Slipsager
- DTU Compute, Technical University of Denmark, Denmark
- Dept. of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
- TracInnovations, Ballerup, Denmark
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Liselotte Højgaard
- Dept. of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
- Paul Wighton
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- M. Dylan Tisdall
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Camilo Jaimes
- Boston Children’s Hospital, Boston, Massachusetts
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
- Borjan A. Gagoski
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, Massachusetts
- P. Ellen Grant
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, Massachusetts
- André van der Kouwe
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
- Oline V. Olesen
- DTU Compute, Technical University of Denmark, Denmark
- Dept. of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
- TracInnovations, Ballerup, Denmark
- Robert Frost
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
- Dept. of Radiology, Harvard Medical School, Boston, Massachusetts
16
Polak D, Splitthoff DN, Clifford B, Lo WC, Huang SY, Conklin J, Wald LL, Setsompop K, Cauley S. Scout accelerated motion estimation and reduction (SAMER). Magn Reson Med 2022; 87:163-178. [PMID: 34390505] [PMCID: PMC8616778] [DOI: 10.1002/mrm.28971]
Abstract
PURPOSE To demonstrate a navigator/tracking-free retrospective motion estimation technique that facilitates clinically acceptable reconstruction time. METHODS Scout accelerated motion estimation and reduction (SAMER) uses a single 3-5 s, low-resolution scout scan and a novel sequence reordering to independently determine motion states by minimizing the data-consistency error in a SENSE plus motion forward model. This eliminates time-consuming alternating optimization as no updates to the imaging volume are required during the motion estimation. The SAMER approach was assessed quantitatively through extensive simulation and was evaluated in vivo across multiple motion scenarios and clinical imaging contrasts. Finally, SAMER was synergistically combined with advanced encoding (Wave-CAIPI) to facilitate rapid motion-free imaging. RESULTS The highly accelerated scout provided sufficient information to achieve accurate motion trajectory estimation (accuracy ~0.2 mm or degrees). The novel sequence reordering improved the stability of the motion parameter estimation and image reconstruction while preserving the clinical imaging contrast. Clinically acceptable computation times for the motion estimation (~4 s/shot) are demonstrated through a fully separable (non-alternating) motion search across the shots. Substantial artifact reduction was demonstrated in vivo as well as corresponding improvement in the quantitative error metric. Finally, the extension of SAMER to Wave-encoding enabled rapid high-quality imaging at up to R = 9-fold acceleration. CONCLUSION SAMER significantly improved the computational scalability for retrospective motion estimation and correction.
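The shot-wise, non-alternating search at the heart of SAMER can be sketched in miniature: with the scout image held fixed, each shot's motion state is found independently by minimizing the data-consistency error of a Fourier forward model. A toy illustration (translations only, single coil, brute-force search; the actual method uses a SENSE-plus-motion model with rotations and a tailored reordering):

```python
import numpy as np

N = 64
scout = np.zeros((N, N)); scout[20:44, 16:48] = 1.0    # stand-in for the scout scan
k = np.fft.fftfreq(N)
ky, kx = np.meshgrid(k, k, indexing="ij")
shot = (np.arange(N) % 4 == 0)                         # k-space lines of this shot

def forward(img, tx, ty):
    """Fourier forward model of a translated object, sampled on this shot's lines."""
    ksp = np.fft.fft2(img) * np.exp(-2j*np.pi*(kx*tx + ky*ty))
    return ksp[shot, :]

y = forward(scout, 2.0, -1.0)                          # shot acquired while shifted

# Separable search: no image update needed, only a data-consistency minimization
best = min((np.linalg.norm(forward(scout, tx, ty) - y), tx, ty)
           for tx in range(-3, 4) for ty in range(-3, 4))
# best[1], best[2] recover the shot's translation (2, -1)
```

Because each shot's search touches only its own data, the shots can be estimated in parallel, which is the source of the ~4 s/shot scalability claimed above.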
Affiliation(s)
- Daniel Polak
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Siemens Healthcare GmbH, Erlangen, Germany
- Susie Y. Huang
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- John Conklin
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Lawrence L. Wald
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Kawin Setsompop
- Department of Radiology, Stanford School of Medicine, Stanford, California, USA
- Stephen Cauley
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
17
Vecchiato K, Egloff A, Carney O, Siddiqui A, Hughes E, Dillon L, Colford K, Green E, Texeira RPAG, Price AN, Ferrazzi G, Hajnal JV, Carmichael DW, Cordero-Grande L, O'Muircheartaigh J. Evaluation of DISORDER: Retrospective Image Motion Correction for Volumetric Brain MRI in a Pediatric Setting. AJNR Am J Neuroradiol 2021; 42:774-781. [PMID: 33602745] [DOI: 10.3174/ajnr.a7001]
Abstract
BACKGROUND AND PURPOSE Head motion causes image degradation in brain MR imaging examinations, negatively impacting image quality, especially in pediatric populations. Here, we used a retrospective motion correction technique in children and assessed image quality improvement for 3D MR imaging acquisitions. MATERIALS AND METHODS We prospectively acquired brain MR imaging at 3T using 3D sequences, T1-weighted MPRAGE, T2-weighted TSE, and FLAIR in 32 unsedated children, including 7 with epilepsy (age range, 2-18 years). We implemented a novel motion correction technique through a modification of k-space data acquisition: Distributed and Incoherent Sample Orders for Reconstruction Deblurring by using Encoding Redundancy (DISORDER). For each participant and technique, we obtained 3 reconstructions as acquired (Aq), after DISORDER motion correction (Di), and Di with additional outlier rejection (DiOut). We analyzed 288 images quantitatively, measuring 2 objective no-reference image quality metrics: gradient entropy (GE) and MPRAGE white matter (WM) homogeneity. As a qualitative metric, we presented blinded and randomized images to 2 expert neuroradiologists who scored them for clinical readability. RESULTS Both image quality metrics improved after motion correction for all modalities, and improvement correlated with the amount of intrascan motion. Neuroradiologists also considered the motion corrected images as of higher quality (Wilcoxon z = -3.164 for MPRAGE; z = -2.066 for TSE; z = -2.645 for FLAIR; all P < .05). CONCLUSIONS Retrospective image motion correction with DISORDER increased image quality both from an objective and qualitative perspective. In 75% of sessions, at least 1 sequence was improved by this approach, indicating the benefit of this technique in unsedated children for both clinical and research environments.
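Gradient entropy, one of the two no-reference metrics used in this evaluation, rewards images whose gradient energy is concentrated at sharp edges rather than smeared by motion. A sketch using one common formulation (the paper's exact normalization may differ):

```python
import numpy as np

def gradient_entropy(img, eps=1e-12):
    """Entropy of the normalized gradient-magnitude distribution.
    Lower values indicate sharper, less motion-blurred images."""
    gy, gx = np.gradient(img.astype(float))
    g = np.sqrt(gx**2 + gy**2).ravel()
    h = g / (g.sum() + eps)                 # treat magnitudes as a distribution
    h = h[h > 0]
    return float(-(h * np.log(h)).sum())

# Blur concentrates less gradient energy at edges, so entropy rises
sharp = np.zeros((64, 64)); sharp[24:40, 24:40] = 1.0
f2 = np.fft.fftfreq(64)[:, None]**2 + np.fft.fftfreq(64)[None, :]**2
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.exp(-400.0 * f2)))
```

Here `gradient_entropy(blurred) > gradient_entropy(sharp)`, which is why a decrease in gradient entropy after DISORDER correction is read as an image quality improvement.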
Affiliation(s)
- K Vecchiato
- Department for Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- A Egloff
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- O Carney
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- Department of Radiology, Great Ormond Street Hospital for Children, NHS Foundation Trust London, United Kingdom
- A Siddiqui
- Department of Radiology, Guy's and Saint Thomas' Hospitals NHS Trust, London, United Kingdom
- E Hughes
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- L Dillon
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- K Colford
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- E Green
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- R P A G Texeira
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- A N Price
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- G Ferrazzi
- IRCCS San Camillo Hospital, Venice, Italy
- J V Hajnal
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- D W Carmichael
- EPSRC/Wellcome Centre for Medical Engineering, Biomedical Engineering
- L Cordero-Grande
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain
- J O'Muircheartaigh
- Department for Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences
- MRC Centre for Neurodevelopmental Disorders, King's College London, London, United Kingdom
18
Duffy BA, Zhao L, Sepehrband F, Min J, Wang DJ, Shi Y, Toga AW, Kim H. Retrospective motion artifact correction of structural MRI images using deep learning improves the quality of cortical surface reconstructions. Neuroimage 2021; 230:117756. [PMID: 33460797] [DOI: 10.1016/j.neuroimage.2021.117756]
Abstract
Head motion during MRI acquisition presents significant challenges for neuroimaging analyses. In this work, we present a retrospective motion correction framework built on a Fourier domain motion simulation model combined with established 3D convolutional neural network (CNN) architectures. Quantitative evaluation metrics were used to validate the method on three separate multi-site datasets. The 3D CNN was trained using motion-free images that were corrupted using simulated artifacts. CNN based correction successfully diminished the severity of artifacts on real motion affected data on a separate test dataset as measured by significant improvements in image quality metrics compared to a minimal motion reference image. On the test set of 13 image pairs, the mean peak signal-to-noise-ratio was improved from 31.7 to 33.3 dB. Furthermore, improvements in cortical surface reconstruction quality were demonstrated using a blinded manual quality assessment on the Parkinson's Progression Markers Initiative (PPMI) dataset. Upon applying the correction algorithm, out of a total of 617 images, the number of quality control failures was reduced from 61 to 38. On this same dataset, we investigated whether motion correction resulted in a more statistically significant relationship between cortical thickness and Parkinson's disease. Before correction, significant cortical thinning was found to be restricted to limited regions within the temporal and frontal lobes. After correction, there was found to be more widespread and significant cortical thinning bilaterally across the temporal lobes and frontal cortex. Our results highlight the utility of image domain motion correction for use in studies with a high prevalence of motion artifacts, such as studies of movement disorders as well as infant and pediatric subjects.
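The Fourier-domain simulation used to generate training pairs can be sketched as follows: bands of phase-encode lines are drawn from differently translated copies of a motion-free image, mimicking inter-shot movement. This is illustrative only; the paper's simulation model also covers rotations and richer motion traces.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
clean = rng.random((N, N))                             # stand-in motion-free image
k = np.fft.fftfreq(N)
ky, kx = np.meshgrid(k, k, indexing="ij")

ksp = np.fft.fft2(clean)
for lines in np.array_split(np.arange(N), 4):          # 4 discrete motion states
    tx, ty = rng.uniform(-3, 3, size=2)                # random rigid translation
    moved = np.fft.fft2(clean) * np.exp(-2j*np.pi*(kx*tx + ky*ty))
    ksp[lines, :] = moved[lines, :]                    # this band sees moved object

corrupted = np.abs(np.fft.ifft2(ksp))                  # motion-artifacted input
# (clean, corrupted) forms one supervised training pair for the 3D CNN
```

Because the corruption is synthesized from a known clean image, arbitrarily many paired examples can be produced without acquiring motion-corrupted scans.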
Affiliation(s)
- Ben A Duffy
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Lu Zhao
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Farshid Sepehrband
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Joyce Min
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Danny JJ Wang
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Yonggang Shi
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Arthur W Toga
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Hosung Kim
- Laboratory of Neuro Imaging (LONI), Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
19
Cordero-Grande L, Ferrazzi G, Teixeira RPAG, O'Muircheartaigh J, Price AN, Hajnal JV. Motion-corrected MRI with DISORDER: Distributed and incoherent sample orders for reconstruction deblurring using encoding redundancy. Magn Reson Med 2020; 84. [PMID: 31898832] [PMCID: PMC7392051] [DOI: 10.1002/mrm.28157]
Abstract
PURPOSE To enable rigid body motion-tolerant parallel volumetric magnetic resonance imaging by retrospective head motion correction on a variety of spatiotemporal scales and imaging sequences. THEORY AND METHODS Tolerance against rigid body motion is based on distributed and incoherent sampling orders for boosting a joint retrospective motion estimation and reconstruction framework. Motion resilience stems from the encoding redundancy in the data, as generally provided by the coil array. Hence, it does not require external sensors, navigators or training data, so the methodology is readily applicable to sequences using 3D encodings. RESULTS Simulations are performed showing full inter-shot corrections for usual levels of in vivo motion, large number of shots, standard levels of noise and moderate acceleration factors. Feasibility of inter- and intra-shot corrections is shown under controlled motion in vivo. Practical efficacy is illustrated by high-quality results in most corrupted of 208 volumes from a series of 26 clinical pediatric examinations collected using standard protocols. CONCLUSIONS The proposed framework addresses the rigid motion problem in volumetric anatomical brain scans with sufficient encoding redundancy which has enabled reliable pediatric examinations without sedation.
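The core idea of distributed, incoherent sampling orders can be illustrated with a toy shot assignment over a ky-kz phase-encode grid: every shot visits samples spread across all of k-space rather than a contiguous band, so each shot retains enough encoding redundancy to estimate its own pose. This is an illustrative interleaved variant, not one of the exact DISORDER schemes.

```python
import numpy as np

Ny, Nz, n_shots = 16, 16, 8
idx = np.arange(Ny * Nz)
ky, kz = np.unravel_index(idx, (Ny, Nz))

# Interleaved assignment: neighbouring phase encodes land in different shots,
# and each shot's samples are scattered over the whole grid.
shot = (ky + 3 * kz) % n_shots

# Every shot receives the same number of samples, distributed over k-space
counts = np.bincount(shot, minlength=n_shots)
```

A sequential (band-by-band) ordering would instead give each shot a narrow k-space strip, leaving individual shots poorly conditioned for per-shot motion estimation.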
Affiliation(s)
- Lucilio Cordero-Grande
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Giulio Ferrazzi
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Rui Pedro A G Teixeira
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Jonathan O'Muircheartaigh
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Anthony N Price
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Joseph V Hajnal
- Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK