1
Faghihpirayesh R, Karimi D, Erdoğmuş D, Gholipour A. Fetal-BET: Brain Extraction Tool for Fetal MRI. IEEE Open Journal of Engineering in Medicine and Biology 2024; 5:551-562. [PMID: 39157057] [PMCID: PMC11329220] [DOI: 10.1109/ojemb.2024.3426969]
Abstract
Goal: In this study, we address the critical challenge of fetal brain extraction from MRI sequences. Fetal MRI has played a crucial role in prenatal neurodevelopmental studies and in advancing our knowledge of fetal brain development in utero. Fetal brain extraction is a necessary first step in most computational fetal brain MRI pipelines, but it poses significant challenges due to 1) non-standard fetal head positioning, 2) fetal movements during examination, and 3) the vastly heterogeneous appearance of the developing fetal brain and the neighboring fetal and maternal anatomy across gestation, sequences, and scanning conditions. Developing a machine learning method that effectively addresses this task requires a large, richly labeled dataset that has not previously been available, and no existing method performs accurate fetal brain extraction across the various fetal MRI sequences. Methods: We first built a large annotated dataset of approximately 72,000 2D fetal brain MRI images. The dataset covers the three common MRI sequences (T2-weighted, diffusion-weighted, and functional MRI) acquired with different scanners, and includes images of both normal and pathological brains. Using this dataset, we developed and validated deep learning methods that exploit U-Net-style architectures, the attention mechanism, feature learning across multiple MRI modalities, and data augmentation for fast, accurate, and generalizable automatic fetal brain extraction. Results: Evaluations on independent test data, including data from other centers, show that our method achieves accurate brain extraction on heterogeneous test data acquired with different scanners, on pathological brains, and at various gestational stages. Conclusions: By leveraging rich information from diverse multi-modality fetal MRI data, our proposed deep learning solution enables precise delineation of the fetal brain on various fetal MRI sequences. The robustness of our deep learning model underscores its potential utility for fetal brain imaging.
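The abstract does not specify the augmentation pipeline; as a hypothetical sketch of the kind of geometric augmentation that helps a model cope with non-standard fetal head positioning, random flips and quarter-turn rotations of a 2D slice might look like the following (the function name and interface are illustrative, not the paper's code):

```python
import random

def augment_slice(image, rng=None):
    """Randomly flip and rotate a 2D slice (list of rows) by a multiple
    of 90 degrees. A toy stand-in for the geometric augmentations used
    to simulate arbitrary fetal head orientations."""
    rng = rng or random.Random()
    if rng.random() < 0.5:                     # horizontal flip
        image = [row[::-1] for row in image]
    for _ in range(rng.randrange(4)):          # 0-3 clockwise quarter turns
        image = [list(row) for row in zip(*image[::-1])]
    return image
```

Because every transform is a permutation of pixels, the intensity histogram of the slice is preserved, which is one reason purely geometric augmentation is a safe default.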
Affiliation(s)
- Razieh Faghihpirayesh
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Davood Karimi
- Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Deniz Erdoğmuş
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Ali Gholipour
- Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
2
Chen J, Lu R, Ye S, Guang M, Tassew TM, Jing B, Zhang G, Chen G, Shen D. Image Recovery Matters: A Recovery-Extraction Framework for Robust Fetal Brain Extraction From MR Images. IEEE J Biomed Health Inform 2024; 28:823-834. [PMID: 37995170] [DOI: 10.1109/jbhi.2023.3333953]
Abstract
The extraction of the fetal brain from magnetic resonance (MR) images is a challenging task. In particular, fetal MR images suffer from various artifacts introduced during image acquisition, among which intensity inhomogeneity is a common one affecting brain extraction. In this work, we propose a deep learning-based recovery-extraction framework for fetal brain extraction that is particularly effective on fetal MR images with intensity inhomogeneity. Our framework involves two stages. First, artifact-corrupted images are recovered with the proposed generative adversarial image recovery network, whose novel region-of-darkness discriminator forces the network to focus on the artifacts in the images. Second, we propose a brain extraction network that segments the fetal brain more effectively by strengthening the association between lower- and higher-level features while suppressing task-irrelevant features. Thanks to the proposed recovery-extraction strategy, our framework accurately segments fetal brains from artifact-corrupted MR images. Experiments show that our framework achieves promising performance in both quantitative and qualitative evaluations, and outperforms state-of-the-art methods in both image recovery and fetal brain extraction.
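The paper's recovery stage is GAN-based; a far simpler stand-in for the "recover first, then segment" idea, assuming a slowly varying multiplicative bias on a 1D intensity profile (function name and windowed-mean estimator are my own illustration, not the paper's method):

```python
def correct_inhomogeneity(signal, window=5, eps=1e-6):
    """Toy bias-field correction: estimate the slowly varying intensity
    bias with a moving average, then divide it out. Real intensity
    inhomogeneity correction (and the paper's recovery network) is far
    more sophisticated; this only shows the shape of the computation."""
    n = len(signal)
    half = window // 2
    corrected = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        seg = signal[lo:hi]
        bias = sum(seg) / len(seg)          # local low-frequency estimate
        corrected.append(signal[i] / (bias + eps))
    return corrected
```

On a perfectly flat profile the estimated bias equals the signal, so the corrected output is approximately 1 everywhere, which is the expected fixed point of a multiplicative correction.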
3
Hwang IK, Kang SR, Yang S, Kim JM, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ, Kim TI. SinusC-Net for automatic classification of surgical plans for maxillary sinus augmentation using a 3D distance-guided network. Sci Rep 2023; 13:11653. [PMID: 37468515] [DOI: 10.1038/s41598-023-38273-9]
Abstract
The objective of this study was to automatically classify surgical plans for maxillary sinus floor augmentation in implant placement at the maxillary posterior edentulous region using a 3D distance-guided network on CBCT images. We applied a modified ABC classification method consisting of five surgical approaches for the deep learning model. The proposed model (SinusC-Net) consists of two stages, detection and classification, following the modified classification method. In detection, five landmarks on CBCT images are automatically located by a volumetric regression network; in classification, the images are automatically assigned to one of the five surgical approaches by a 3D distance-guided network. The mean radial error (MRE) for landmark detection was 0.87 mm, and the successful detection rate (SDR) within 2 mm was 95.47%. The mean accuracy, sensitivity, specificity, and AUC for classification by the SinusC-Net were 0.97, 0.92, 0.98, and 0.95, respectively. The deep learning model using 3D distance guidance accurately detected 3D anatomical landmarks, and automatically and accurately classified surgical approaches for sinus floor augmentation in implant placement at the maxillary posterior edentulous region.
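The two detection metrics reported above have direct definitions: MRE is the mean Euclidean distance between predicted and ground-truth landmarks, and SDR is the fraction of landmarks within a distance threshold. A minimal sketch (function name is my own):

```python
import math

def mre_and_sdr(pred, true, threshold=2.0):
    """Mean radial error (in the landmarks' units, e.g. mm) and
    successful-detection rate: the fraction of predicted 3D landmarks
    within `threshold` of their ground-truth positions."""
    dists = [math.dist(p, t) for p, t in zip(pred, true)]
    mre = sum(dists) / len(dists)
    sdr = sum(d <= threshold for d in dists) / len(dists)
    return mre, sdr
```

For example, a perfect landmark and one off by 5 mm give an MRE of 2.5 mm and an SDR of 50% at the 2 mm threshold.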
Affiliation(s)
- In-Kyung Hwang
- Department of Periodontology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, 03080, Republic of Korea
- Se-Ryong Kang
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, 08826, Republic of Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, 08826, Republic of Korea
- Jun-Min Kim
- Department of Electronics and Information Engineering, Hansung University, Seoul, 02876, Republic of Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Won-Jin Yi
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, 08826, Republic of Korea.
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea.
- Tae-Il Kim
- Department of Periodontology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, 03080, Republic of Korea.
4
Zhao L, Asis-Cruz JD, Feng X, Wu Y, Kapse K, Largent A, Quistorff J, Lopez C, Wu D, Qing K, Meyer C, Limperopoulos C. Automated 3D Fetal Brain Segmentation Using an Optimized Deep Learning Approach. AJNR Am J Neuroradiol 2022; 43:448-454. [PMID: 35177547] [PMCID: PMC8910820] [DOI: 10.3174/ajnr.a7419]
Abstract
BACKGROUND AND PURPOSE: MR imaging provides critical information about fetal brain growth and development. Currently, morphologic analysis primarily relies on manual segmentation, which is time-intensive and has limited repeatability. This work aimed to develop a deep learning-based automatic fetal brain segmentation method that provides improved accuracy and robustness compared with atlas-based methods. MATERIALS AND METHODS: A total of 106 fetal MR imaging studies were acquired prospectively from fetuses between 23 and 39 weeks of gestation. We trained a deep learning model on the MR imaging scans of 65 healthy fetuses and compared its performance with a 4D atlas-based segmentation method using the Wilcoxon signed-rank test. The trained model was also evaluated on data from 41 fetuses diagnosed with congenital heart disease. RESULTS: The proposed method showed high consistency with the manual segmentation, with an average Dice score of 0.897. It also demonstrated significantly improved performance (P < .001) based on the Dice score and 95% Hausdorff distance in all brain regions compared with the atlas-based method. The performance of the proposed method was consistent across gestational ages. The segmentations of the brains of fetuses with high-risk congenital heart disease were also highly consistent with the manual segmentation, though the Dice score was 7% lower than that of healthy fetuses. CONCLUSIONS: The proposed deep learning method provides an efficient and reliable approach for fetal brain segmentation, which outperformed segmentation based on a 4D atlas and has been used in clinical and research settings.
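The Dice score used for these comparisons is defined as 2|A∩B| / (|A|+|B|) over two binary masks; a minimal version over flattened 0/1 masks (the function name is my own, not from the paper's code):

```python
def dice_score(mask_a, mask_b):
    """Dice similarity between two binary masks given as flattened 0/1
    lists: 2 * |intersection| / (|A| + |B|). Returns 1.0 for two empty
    masks, a common convention."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * inter / total
```

Two masks that each contain two voxels but overlap in only one score 2·1/(2+2) = 0.5, which makes the reported 0.897 average easy to interpret.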
Affiliation(s)
- L Zhao
- From the Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children's National, Washington, DC
- Department of Biomedical Engineering (L.Z., D.W.), Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, China
- J D Asis-Cruz
- From the Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children's National, Washington, DC
- X Feng
- Department of Biomedical Engineering (X.F., C.M.), University of Virginia, Charlottesville, Virginia
- Y Wu
- From the Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children's National, Washington, DC
- K Kapse
- From the Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children's National, Washington, DC
- A Largent
- From the Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children's National, Washington, DC
- J Quistorff
- From the Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children's National, Washington, DC
- C Lopez
- From the Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children's National, Washington, DC
- D Wu
- Department of Biomedical Engineering (L.Z., D.W.), Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, China
- K Qing
- Department of Radiation Oncology (K.Q.), City of Hope National Center, Duarte, California
- C Meyer
- Department of Biomedical Engineering (X.F., C.M.), University of Virginia, Charlottesville, Virginia
- C Limperopoulos
- From the Department of Diagnostic Imaging and Radiology (L.Z., J.D.A.-C., Y.W., K.K., A.L., J.Q., C. Lopez, C. Limperopoulos), Developing Brain Institute, Children's National, Washington, DC
5
Single-Input Multi-Output U-Net for Automated 2D Foetal Brain Segmentation of MR Images. J Imaging 2021; 7:jimaging7100200. [PMID: 34677286] [PMCID: PMC8536962] [DOI: 10.3390/jimaging7100200]
Abstract
In this work, we develop the Single-Input Multi-Output U-Net (SIMOU-Net), a hybrid network for foetal brain segmentation inspired by the original U-Net fused with the holistically nested edge detection (HED) network. The SIMOU-Net is similar to the original U-Net but has a deeper architecture and takes account of the features extracted from each side output. It acts like an ensemble neural network; however, instead of averaging the outputs of several independently trained models, which is computationally expensive, our approach combines the outputs of a single network to reduce the variance of predictions and the generalization error. Experimental results using 200 normal foetal brains comprising over 11,500 2D images produced Dice and Jaccard coefficients of 94.2 ± 5.9% and 88.7 ± 6.9%, respectively. We further tested the proposed network on 54 abnormal cases (over 3,500 images) and achieved Dice and Jaccard coefficients of 91.2 ± 6.8% and 85.7 ± 6.6%, respectively.
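The fusion idea above, averaging several probability outputs of one network instead of several trained models, reduces to a per-pixel mean followed by a threshold. A toy sketch under that assumption (names are illustrative; the actual SIMOU-Net fusion is learned end to end):

```python
def fuse_side_outputs(side_outputs, threshold=0.5):
    """Average per-pixel probabilities from several side outputs of a
    single network and threshold the mean, mimicking the cheap
    alternative to a multi-model ensemble for variance reduction.
    `side_outputs` is a list of flattened probability maps."""
    n = len(side_outputs)
    fused = [sum(vals) / n for vals in zip(*side_outputs)]
    return [int(p >= threshold) for p in fused]
```

Averaging before thresholding is what suppresses the variance: a pixel that only one side output marks with high confidence is outvoted by the others.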
6
Pei Y, Chen L, Zhao F, Wu Z, Zhong T, Wang Y, Chen C, Wang L, Zhang H, Wang L, Li G. Learning Spatiotemporal Probabilistic Atlas of Fetal Brains with Anatomically Constrained Registration Network. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021; 12907:239-248. [PMID: 35128549] [PMCID: PMC8816449] [DOI: 10.1007/978-3-030-87234-2_23]
Abstract
Brain atlases are of fundamental importance for analyzing dynamic neurodevelopment in fetal brain studies. Since brain size, shape, and anatomical structures change rapidly during the prenatal period, it is essential to construct a spatiotemporal (4D) atlas equipped with tissue probability maps, which can preserve the sharp folding patterns of the early brain for accurately characterizing dynamic changes in fetal brains, and can provide prior tissue information for related tasks, e.g., segmentation, registration, and parcellation. In this work, we propose a novel unsupervised age-conditional learning framework that builds temporally continuous fetal brain atlases by incorporating tissue segmentation maps, outperforming traditional atlas construction methods in three respects. First, our framework learns age-conditional deformable templates by leveraging the entire collection. Second, we use reliable brain tissue segmentation maps in addition to the low-contrast, noisy intensity images to enhance the alignment of individual images. Third, a novel loss function enforces similarity between the tissue probability map learned on the atlas and each subject's tissue segmentation map after registration, providing extra anatomical-consistency supervision for atlas building. Our 4D temporally continuous fetal brain atlases are constructed from 82 healthy fetuses spanning 22 to 32 gestational weeks. Compared with atlases built by state-of-the-art algorithms, our atlases preserve more structural details and sharper folding patterns. Together with the learned tissue probability maps, our 4D fetal atlases provide a valuable reference for spatial normalization and analysis of fetal brain development.
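The learned age-conditional template has a classical counterpart that the paper improves on: kernel regression in gestational age over spatially aligned subject images. A toy version, assuming the images are already registered and flattened (names and the Gaussian kernel choice are my own illustration):

```python
import math

def age_conditional_atlas(images, ages, target_age, sigma=1.0):
    """Kernel-regressed template: weight each aligned subject image by a
    Gaussian in gestational age and take the weighted per-voxel average.
    This is the classical construction that learned age-conditional
    registration networks aim to sharpen."""
    weights = [math.exp(-((a - target_age) ** 2) / (2 * sigma ** 2))
               for a in ages]
    wsum = sum(weights)
    n_vox = len(images[0])
    return [sum(w * img[i] for w, img in zip(weights, images)) / wsum
            for i in range(n_vox)]
```

The Gaussian averaging is also why classical atlases blur early folding patterns: subjects weeks away from the target age still contribute, which motivates the learned alternative.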
Affiliation(s)
- Yuchen Pei
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Liangjun Chen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Fenqiang Zhao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Zhengwang Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Tao Zhong
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Ya Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Changan Chen
- Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- He Zhang
- Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China
- Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
7
Chen J, Fang Z, Zhang G, Ling L, Li G, Zhang H, Wang L. Automatic brain extraction from 3D fetal MR image with deep learning-based multi-step framework. Comput Med Imaging Graph 2020; 88:101848. [PMID: 33385932] [DOI: 10.1016/j.compmedimag.2020.101848]
Abstract
Brain extraction is a fundamental prerequisite in fetal neuroimage analysis. Due to surrounding maternal tissues and unpredictable fetal movement, brain extraction from fetal magnetic resonance (MR) images is a challenging task. In this paper, we propose a novel deep learning-based multi-step framework for brain extraction from 3D fetal MR images. In the first step, a global localization network estimates probability maps for brain candidates, and a connected-component labeling algorithm eliminates small erroneous components to accurately locate the candidate brain area. In the second step, a local refinement network operates within the brain candidate area to obtain fine-grained probability maps. Final extraction results are derived by a fusion network from the two cascaded probability maps produced by the previous two steps. Experimental results demonstrate that our proposed method has superior performance compared with existing deep learning-based methods.
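The connected-component cleanup step named above is a standard post-processing trick: label the components of the thresholded probability map and keep only the largest. A 2D sketch using breadth-first search (the paper works in 3D; the function name is my own):

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a
    binary 2D mask (list of 0/1 rows), discarding small spurious
    detections the way a localization network's output is cleaned."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:                       # BFS over one component
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```

In practice one would use a library routine (e.g. `scipy.ndimage.label`) rather than hand-rolled BFS, but the logic is the same.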
Affiliation(s)
- Jian Chen
- School of Electronic, Electrical Engineering and Physics, Fujian University of Technology, Fuzhou, Fujian, 350118, China.
- Zhenghan Fang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, 27517, USA
- Guofu Zhang
- Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, 200011, China
- Lei Ling
- Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, 200011, China
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, 27517, USA
- He Zhang
- Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, 200011, China.
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, 27517, USA.
8
Joint Image Quality Assessment and Brain Extraction of Fetal MRI Using Deep Learning. Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020. [DOI: 10.1007/978-3-030-59725-2_40]