1. Faghihpirayesh R, Karimi D, Erdoğmuş D, Gholipour A. Fetal-BET: Brain Extraction Tool for Fetal MRI. IEEE Open J Eng Med Biol 2024; 5:551-562. [PMID: 39157057 PMCID: PMC11329220 DOI: 10.1109/ojemb.2024.3426969] [Received: 02/05/2024] [Revised: 05/09/2024] [Accepted: 07/07/2024] [Indexed: 08/20/2024] Open Access
Abstract
Goal: In this study, we address the critical challenge of fetal brain extraction from MRI sequences. Fetal MRI has played a crucial role in prenatal neurodevelopmental studies and in advancing our knowledge of fetal brain development in utero. Fetal brain extraction is a necessary first step in most computational fetal brain MRI pipelines. However, it poses significant challenges due to 1) non-standard fetal head positioning, 2) fetal movements during examination, and 3) the vastly heterogeneous appearance of the developing fetal brain and the neighboring fetal and maternal anatomy across gestation and across sequences and scanning conditions. Developing a machine learning method that effectively addresses this task requires a large, richly labeled dataset that has not previously been available; currently, no method performs accurate fetal brain extraction across the various fetal MRI sequences. Methods: We first built a large annotated dataset of approximately 72,000 2D fetal brain MRI images. Our dataset covers the three common MRI sequences, T2-weighted, diffusion-weighted, and functional MRI, acquired with different scanners, and includes images of both normal and pathological brains. Using this dataset, we developed and validated deep learning methods that exploit U-Net-style architectures, attention mechanisms, feature learning across multiple MRI modalities, and data augmentation for fast, accurate, and generalizable automatic fetal brain extraction. Results: Evaluations on independent test data, including data available from other centers, show that our method achieves accurate brain extraction on heterogeneous test data acquired with different scanners, on pathological brains, and at various gestational stages. Conclusions: By leveraging rich information from diverse multi-modality fetal MRI data, our proposed deep learning solution enables precise delineation of the fetal brain on various fetal MRI sequences. The robustness of our deep learning model underscores its potential utility for fetal brain imaging.
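The abstract reports accurate brain extraction on heterogeneous test data; the standard way to quantify agreement between a predicted brain mask and a ground-truth mask is the Dice overlap coefficient. A minimal NumPy sketch of that metric (illustrative only, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks (1 = brain, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Toy 2D example: two partially overlapping "brain" masks
pred = np.zeros((8, 8), dtype=np.uint8)
truth = np.zeros((8, 8), dtype=np.uint8)
pred[2:6, 2:6] = 1   # 16 pixels
truth[3:7, 3:7] = 1  # 16 pixels, 9 shared with pred
print(dice_coefficient(pred, truth))  # 2*9/(16+16) = 0.5625
```

A Dice value of 1.0 means perfect overlap; in practice a 3D volume's masks are compared slice-by-slice or as whole volumes in exactly the same way.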
Affiliation(s)
- Razieh Faghihpirayesh
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Davood Karimi
- Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Deniz Erdoğmuş
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Ali Gholipour
- Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
2. Multi-view prediction of Alzheimer's disease progression with end-to-end integrated framework. J Biomed Inform 2021; 125:103978. [PMID: 34922021 DOI: 10.1016/j.jbi.2021.103978] [Received: 07/21/2021] [Revised: 12/05/2021] [Accepted: 12/11/2021] [Indexed: 11/21/2022]
Abstract
Alzheimer's disease is a common neurodegenerative brain disease that affects the elderly population worldwide. Its early automatic detection is vital for early intervention and treatment. A common solution is to predict a future cognitive score from the baseline brain structural magnetic resonance image (MRI), which can directly indicate the potential severity of disease. Recently, several studies have modelled disease progression by predicting the future brain MRI, which provides visual information about brain changes over time. Nevertheless, no studies have explored the interrelation of these two solutions, and it is unknown whether the predicted MRI can assist the prediction of the cognitive score. Here, instead of independent prediction, we aim to predict disease progression from multiple views, i.e., predicting subject-specific changes of cognitive score and MRI volume concurrently. To achieve this, we propose an end-to-end integrated framework in which a regression model and a generative adversarial network are integrated and jointly optimized. Three integration strategies are exploited to unify the two models. Moreover, considering that some brain regions, such as the hippocampus and middle temporal gyrus, can change significantly during disease progression, a region-of-interest (ROI) mask and an ROI loss are introduced into the integrated framework to leverage this anatomical prior knowledge. Experimental results on the longitudinal Alzheimer's Disease Neuroimaging Initiative dataset demonstrated that the integrated framework outperformed the independent regression model for cognitive score prediction, and its performance was further improved by the ROI loss for both cognitive score and MRI prediction.
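The abstract describes a joint objective combining a cognitive-score regression term, an adversarial term, and an ROI-restricted image term. A minimal NumPy sketch of how such terms might be combined (the weights, names, and the ROI-masked L1 form are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def roi_masked_l1(pred_mri, true_mri, roi_mask):
    """Mean absolute image error restricted to an ROI (e.g., a hippocampus mask)."""
    diff = np.abs(pred_mri - true_mri) * roi_mask
    return diff.sum() / max(roi_mask.sum(), 1)

def integrated_loss(score_pred, score_true, pred_mri, true_mri, roi_mask,
                    adv_loss, w_adv=0.1, w_roi=1.0):
    """Joint objective: score regression + weighted GAN term + weighted ROI term."""
    reg = (score_pred - score_true) ** 2  # MSE on the cognitive score
    roi = roi_masked_l1(pred_mri, true_mri, roi_mask)
    return reg + w_adv * adv_loss + w_roi * roi

# Toy example: score error 1.0, ROI image error 1.0, adversarial term 0.5
pred_mri = np.full((4, 4), 2.0)
true_mri = np.ones((4, 4))
roi = np.zeros((4, 4)); roi[:2, :] = 1.0  # ROI covers the top half
print(integrated_loss(1.0, 0.0, pred_mri, true_mri, roi, adv_loss=0.5))
# 1.0 + 0.1*0.5 + 1.0*1.0 = 2.05
```

Masking the image loss to the ROI concentrates the gradient signal on regions known to change with disease, which is the anatomical prior the paper exploits.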
3. Pei Y, Chen L, Zhao F, Wu Z, Zhong T, Wang Y, Chen C, Wang L, Zhang H, Wang L, Li G. Learning Spatiotemporal Probabilistic Atlas of Fetal Brains with Anatomically Constrained Registration Network. Med Image Comput Comput Assist Interv 2021; 12907:239-248. [PMID: 35128549 PMCID: PMC8816449 DOI: 10.1007/978-3-030-87234-2_23] [Indexed: 05/20/2023]
Abstract
Brain atlases are of fundamental importance for analyzing dynamic neurodevelopment in fetal brain studies. Since brain size, shape, and anatomical structures change rapidly during the prenatal period, it is essential to construct a spatiotemporal (4D) atlas equipped with tissue probability maps, which can preserve sharper early brain folding patterns for accurately characterizing dynamic changes in fetal brains and provide tissue prior information for related tasks, e.g., segmentation, registration, and parcellation. In this work, we propose a novel unsupervised age-conditional learning framework that builds temporally continuous fetal brain atlases by incorporating tissue segmentation maps, and that outperforms previous traditional atlas construction methods in three aspects. First, our framework learns age-conditional deformable templates by leveraging the entire collection. Second, we leverage reliable brain tissue segmentation maps in addition to the low-contrast, noisy intensity images to enhance the alignment of individual images. Third, a novel loss function enforces the similarity between the learned tissue probability map on the atlas and each subject's tissue segmentation map after registration, thereby providing extra anatomical-consistency supervision for atlas building. Our 4D temporally continuous fetal brain atlases are constructed from 82 healthy fetuses between 22 and 32 gestational weeks. Compared with atlases built by state-of-the-art algorithms, our atlases preserve more structural details and sharper folding patterns. Together with the learned tissue probability maps, our 4D fetal atlases provide a valuable reference for spatial normalization and analysis of fetal brain development.
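The anatomical-consistency loss described above compares the atlas tissue probability map with each registered subject's segmentation. A common choice for such probability-map similarity is a soft Dice loss; the sketch below shows one plausible form (illustrative assumption, not the paper's exact loss):

```python
import numpy as np

def soft_dice_loss(atlas_prob, subj_onehot, eps=1e-6):
    """1 minus the soft Dice between atlas tissue probabilities and a warped
    subject segmentation (one-hot), averaged over tissue classes.
    Both arrays have shape (C, ...) with C tissue classes."""
    axes = tuple(range(1, atlas_prob.ndim))
    inter = (atlas_prob * subj_onehot).sum(axis=axes)
    denom = atlas_prob.sum(axis=axes) + subj_onehot.sum(axis=axes)
    dice_per_class = (2.0 * inter + eps) / (denom + eps)
    return 1.0 - dice_per_class.mean()

# Toy example: 2 tissue classes on a 2x2 grid, one-hot segmentation
seg = np.array([[[1.0, 0.0], [0.0, 1.0]],
                [[0.0, 1.0], [1.0, 0.0]]])
print(soft_dice_loss(seg, seg))          # perfect agreement: loss ~ 0
print(soft_dice_loss(seg, seg[::-1]))    # classes swapped: loss ~ 1
```

Minimizing this term jointly with the registration objective pushes the learned atlas probability maps toward anatomical agreement with every aligned subject.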
Affiliation(s)
- Yuchen Pei
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Liangjun Chen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Fenqiang Zhao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Zhengwang Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Tao Zhong
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Ya Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Changan Chen
- Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- He Zhang
- Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
4. Largent A, Kapse K, Barnett SD, De Asis-Cruz J, Whitehead M, Murnick J, Zhao L, Andersen N, Quistorff J, Lopez C, Limperopoulos C. Image Quality Assessment of Fetal Brain MRI Using Multi-Instance Deep Learning Methods. J Magn Reson Imaging 2021; 54:818-829. [PMID: 33891778 DOI: 10.1002/jmri.27649] [Received: 12/22/2020] [Revised: 04/09/2021] [Accepted: 04/12/2021] [Indexed: 11/06/2022] Open Access
Abstract
BACKGROUND Due to random fetal motion and maternal respiration, the image quality of fetal brain MRIs varies considerably. To address this issue, visual inspection of the images is performed during the acquisition phase and after 3D reconstruction, and images deemed to be of insufficient quality are re-acquired. However, this process is time-consuming and subjective. Multi-instance (MI) deep learning methods (DLMs) may perform this task automatically. PURPOSE To propose an MI count-based DLM (MI-CB-DLM), an MI vote-based DLM (MI-VB-DLM), and an MI feature-embedding DLM (MI-FE-DLM) for automatic assessment of 3D fetal-brain MR image quality, and to quantify the influence of fetal gestational age (GA) on DLM performance. STUDY TYPE Retrospective. SUBJECTS Two hundred and seventy-one MR exams from 211 fetuses (mean GA ± SD = 30.9 ± 5.5 weeks). FIELD STRENGTH/SEQUENCE T2-weighted single-shot fast spin-echo acquired at 1.5 T. ASSESSMENT The T2-weighted images were reconstructed in 3D. Then, two fetal neuroradiologists, a clinical neuroscientist, and a fetal MRI technician independently labeled the reconstructed images as 1 or 0 based on image quality (1 = high; 0 = low). These labels were fused and served as ground truth. The proposed DLMs were trained and evaluated using three repeated 10-fold cross-validations (training and validation sets of 244 and 27 scans). To quantify the influence of GA, this variable was included as an input to the DLMs. STATISTICAL TESTS DLM performance was evaluated using precision, recall, F-score, accuracy, and AUC values. RESULTS Precision, recall, F-score, accuracy, and AUC averaged over the three cross-validations were 0.85 ± 0.01, 0.85 ± 0.01, 0.85 ± 0.01, 0.85 ± 0.01, 0.93 ± 0.01 for MI-CB-DLM (without GA); 0.75 ± 0.03, 0.75 ± 0.03, 0.75 ± 0.03, 0.75 ± 0.03, 0.81 ± 0.03 for MI-VB-DLM (without GA); 0.81 ± 0.01, 0.81 ± 0.01, 0.81 ± 0.01, 0.81 ± 0.01, 0.89 ± 0.01 for MI-FE-DLM (without GA); and 0.86 ± 0.01, 0.86 ± 0.01, 0.86 ± 0.01, 0.86 ± 0.01, 0.93 ± 0.01 for MI-CB-DLM with GA. DATA CONCLUSION MI-CB-DLM performed better than the other DLMs, and including GA as an input further improved its performance. MI-CB-DLM may potentially be used to objectively and rapidly assess fetal MR image quality. LEVEL OF EVIDENCE 4 TECHNICAL EFFICACY Stage 3.
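In multi-instance learning for volume-level quality assessment, per-slice predictions must be aggregated into a single volume label. A count-based rule, in the spirit of the best-performing MI-CB-DLM, can be sketched as follows (the threshold values and the exact aggregation rule are illustrative assumptions, not the paper's):

```python
import numpy as np

def count_based_volume_label(slice_probs, slice_thresh=0.5, count_frac=0.7):
    """Count-based multi-instance aggregation: a 3D volume is labeled high
    quality (1) when the fraction of slices whose predicted quality
    probability exceeds slice_thresh is at least count_frac; otherwise 0."""
    good = np.asarray(slice_probs) > slice_thresh
    return int(good.mean() >= count_frac)

# Toy examples with per-slice quality probabilities from a slice classifier
print(count_based_volume_label([0.9, 0.8, 0.9, 0.2]))  # 3/4 good slices -> 1
print(count_based_volume_label([0.9, 0.2, 0.1, 0.2]))  # 1/4 good slices -> 0
```

A vote-based variant would instead take the majority of binarized slice labels, and a feature-embedding variant would pool slice features before classification, mirroring the three DLMs compared in the study.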
Affiliation(s)
- Axel Largent
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Kushal Kapse
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Scott D Barnett
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Josepheen De Asis-Cruz
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Matthew Whitehead
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA; Department of Neurology, Children's National Hospital, Washington, District of Columbia, USA
- Jonathan Murnick
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Li Zhao
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Nicole Andersen
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Jessica Quistorff
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Catherine Lopez
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA
- Catherine Limperopoulos
- Developing Brain Institute, Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, District of Columbia, USA; Department of Radiology and Pediatrics, George Washington University, Washington, District of Columbia, USA; Department of Neurology, School of Medicine and Health Sciences, George Washington University, Washington, District of Columbia, USA