1
Vahedifard F, Adepoju JO, Supanich M, Ai HA, Liu X, Kocak M, Marathu KK, Byrd SE. Review of deep learning and artificial intelligence models in fetal brain magnetic resonance imaging. World J Clin Cases 2023; 11:3725-3735. [PMID: 37383127; PMCID: PMC10294149; DOI: 10.12998/wjcc.v11.i16.3725]
Abstract
Central nervous system abnormalities in fetuses are fairly common, occurring in 0.1% to 0.2% of live births and in 3% to 6% of stillbirths, so early detection and categorization of fetal brain abnormalities are critical. Manually detecting and segmenting fetal brain magnetic resonance imaging (MRI) can be time-consuming and susceptible to interpreter experience. Artificial intelligence (AI) algorithms and machine learning approaches have high potential for assisting in the early detection of these problems, improving the diagnostic process and follow-up procedures. This narrative review examines the use of AI and machine learning techniques in fetal brain MRI. Using AI, anatomic fetal brain MRI processing has been investigated with models that automatically predict specific landmarks and perform segmentation. All gestational ages (17-38 wk) and various AI models (mainly convolutional neural networks and U-Net) have been used, and some models have achieved accuracies of 95% or more. AI can help preprocess and post-process fetal images and reconstruct images. AI can also be used for gestational age prediction (with one-week accuracy), fetal brain extraction, fetal brain segmentation, and placenta detection. Some fetal brain linear measurements, such as cerebral and bone biparietal diameter, have been suggested. Classification of brain pathology has been studied using diagonal quadratic discriminant analysis, K-nearest neighbor, random forest, naive Bayes, and radial basis function neural network classifiers. Deep learning methods will become more powerful as more large-scale, labeled datasets become available. Shared fetal brain MRI datasets are crucial because few fetal brain images are available. Physicians, particularly neuroradiologists, general radiologists, and perinatologists, should also be aware of AI's role in fetal brain MRI.
Affiliation(s)
- Farzan Vahedifard
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Jubril O Adepoju
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Mark Supanich
- Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL 60612, United States
- Hua Asher Ai
- Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL 60612, United States
- Xuchu Liu
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Mehmet Kocak
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Kranthi K Marathu
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Sharon E Byrd
- Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
2
Huang J, Do QN, Shahed M, Xi Y, Lewis MA, Herrera CL, Owen D, Spong CY, Madhuranthakam AJ, Twickler DM, Fei B. Deep learning based automatic segmentation of the placenta and uterine cavity on prenatal MR images. Proc SPIE Int Soc Opt Eng 2023; 12465:124650N. [PMID: 38486806; PMCID: PMC10937245; DOI: 10.1117/12.2653659]
Abstract
Magnetic resonance imaging (MRI) has potential benefits in understanding fetal and placental complications in pregnancy. Accurate segmentation of the uterine cavity and placenta can help facilitate fast, automated analyses of placenta accreta spectrum and other pregnancy complications. In this study, we trained a deep neural network for fully automatic segmentation of the uterine cavity and placenta from MR images of pregnant women with and without placental abnormalities. The two datasets comprised axial MRI data of 241 pregnant women, of whom 101 also had sagittal MRI data. Our trained model performed fully automatic 3D segmentation of MR image volumes and achieved an average Dice similarity coefficient (DSC) of 92% for the uterine cavity and 82% for the placenta on the sagittal dataset, and an average DSC of 87% for the uterine cavity and 82% for the placenta on the axial dataset. Our automatic segmentation method is the first step in designing an analytics tool to assess the risk of pregnant women with placenta accreta spectrum.
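The Dice similarity coefficient (DSC) reported in this and several of the abstracts below can be sketched as follows; the arrays and shapes here are illustrative toys, not data or code from the paper.

```python
# Minimal sketch of the Dice similarity coefficient (DSC):
# DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Overlap between a predicted mask and a reference mask, in [0, 1]."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Toy 3D volumes: identical masks give a DSC of 1.0
a = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1
print(round(dice_coefficient(a, a), 3))  # 1.0
```

A DSC of 92% in the abstract corresponds to `dice_coefficient` returning 0.92 when averaged over the test volumes.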
Affiliation(s)
- James Huang
- Department of Bioengineering, The University of Texas at Dallas, TX
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX
- Quyen N. Do
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX
- Maysam Shahed
- Department of Bioengineering, The University of Texas at Dallas, TX
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX
- Yin Xi
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX
- Department of Population and Data Sciences, The University of Texas Southwestern Medical Center, Dallas, TX
- Matthew A. Lewis
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX
- Christina L. Herrera
- Department of Obstetrics and Gynecology, The University of Texas Southwestern Medical Center, Dallas, TX
- David Owen
- Department of Obstetrics and Gynecology, The University of Texas Southwestern Medical Center, Dallas, TX
- Catherine Y. Spong
- Department of Obstetrics and Gynecology, The University of Texas Southwestern Medical Center, Dallas, TX
- Diane M. Twickler
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX
- Department of Obstetrics and Gynecology, The University of Texas Southwestern Medical Center, Dallas, TX
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, TX
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX
3
Shi T, Shahedi M, Caughlin K, Dormer JD, Ma L, Fei B. Semi-automated three-dimensional segmentation for cardiac CT images using deep learning and randomly distributed points. Proc SPIE Int Soc Opt Eng 2022; 12034:120341W. [PMID: 36793655; PMCID: PMC9928521; DOI: 10.1117/12.2611594]
Abstract
Given the prevalence of cardiovascular diseases (CVDs), segmentation of the heart on cardiac computed tomography (CT) remains of great importance. Manual segmentation is time-consuming, and intra- and inter-observer variabilities yield inconsistent and inaccurate results. Computer-assisted, and in particular deep learning, approaches to segmentation continue to offer a potentially accurate, efficient alternative to manual segmentation. However, fully automated methods for cardiac segmentation have yet to achieve results accurate enough to compete with expert segmentation. We therefore focus on a semi-automated deep learning approach to cardiac segmentation that bridges the divide between the higher accuracy of manual segmentation and the higher efficiency of fully automated methods. In this approach, we selected a fixed number of points along the surface of the cardiac region to mimic user interaction. Points-distance maps were then generated from these point selections, and a three-dimensional (3D) fully convolutional neural network (FCNN) was trained on the points-distance maps to produce a segmentation prediction. Testing our method with different numbers of selected points, we achieved Dice scores from 0.742 to 0.917 across the four chambers. Specifically, Dice scores averaged 0.846 ± 0.059, 0.857 ± 0.052, 0.826 ± 0.062, and 0.824 ± 0.062 for the left atrium, left ventricle, right atrium, and right ventricle, respectively, across all point selections. This point-guided, image-independent, deep learning segmentation approach demonstrated promising performance for chamber-by-chamber delineation of the heart in CT images.
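The points-distance-map input described above can be sketched as below, assuming random surface-point sampling and a Euclidean distance transform; all function and variable names are illustrative, not the authors' code.

```python
# Hedged sketch: sample a few points on the surface of a (toy) cardiac
# label and build a Euclidean distance map to those points, which would
# be fed to the network as an extra input channel alongside the CT volume.
import numpy as np
from scipy import ndimage

def sample_surface_points(mask: np.ndarray, n_points: int, seed: int = 0):
    """Randomly pick n_points voxels on the boundary of a binary mask."""
    rng = np.random.default_rng(seed)
    surface = mask & ~ndimage.binary_erosion(mask)  # boundary voxels only
    coords = np.argwhere(surface)
    idx = rng.choice(len(coords), size=n_points, replace=False)
    return coords[idx]

def points_distance_map(shape, points):
    """Euclidean distance from every voxel to the nearest selected point."""
    seeds = np.ones(shape, dtype=bool)
    for p in points:
        seeds[tuple(p)] = False  # zero-distance seeds at the selected points
    return ndimage.distance_transform_edt(seeds)

# Toy cube standing in for one cardiac chamber
mask = np.zeros((16, 16, 16), dtype=bool)
mask[4:12, 4:12, 4:12] = True
pts = sample_surface_points(mask, n_points=8)
dmap = points_distance_map(mask.shape, pts)  # extra network input channel
```

Varying `n_points` mirrors the abstract's experiments with different numbers of selected points.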
Affiliation(s)
- Ted Shi
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Maysam Shahedi
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, Richardson, TX
- Kayla Caughlin
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- James D. Dormer
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, Richardson, TX
- Ling Ma
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, Richardson, TX
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX
4
Summers RM, Giger ML. SPIE Computer-Aided Diagnosis conference anniversary review. J Med Imaging (Bellingham) 2022; 9:012208. [PMID: 35607354; PMCID: PMC9119306; DOI: 10.1117/1.jmi.9.s1.012208]
Abstract
The SPIE Computer-Aided Diagnosis conference has been held for 16 consecutive years at the annual SPIE Medical Imaging symposium. The conference remains vibrant, with a core group of submitters as well as new submitters and attendees each year. Recent developments include a marked shift in submissions relating to the artificial intelligence revolution in medical image analysis. This review describes the topics and trends observed in research presented at the Computer-Aided Diagnosis conference as part of the 50th-anniversary celebration of SPIE Medical Imaging.
Affiliation(s)
- Ronald M. Summers
- National Institutes of Health, Radiology and Imaging Sciences, Clinical Center, Bethesda, Maryland, United States
- Maryellen L. Giger
- University of Chicago, Department of Radiology and Committee on Medical Physics, Chicago, Illinois, United States
5
Shahedi M, Spong CY, Dormer JD, Do QN, Xi Y, Lewis MA, Herrera C, Madhuranthakam AJ, Twickler DM, Fei B. Deep learning-based segmentation of the placenta and uterus on MR images. J Med Imaging (Bellingham) 2021; 8:054001. [PMID: 34589556; PMCID: PMC8463933; DOI: 10.1117/1.jmi.8.5.054001]
Abstract
Purpose: Magnetic resonance imaging has recently been used to examine abnormalities of the placenta during pregnancy. Segmentation of the placenta and uterine cavity allows quantitative measures and further analyses of the organs. The objective of this study is to develop a segmentation method with minimal user interaction. Approach: We developed a fully convolutional neural network (CNN) for simultaneous segmentation of the uterine cavity and placenta in three dimensions (3D), with minimal operator interaction incorporated for training and testing of the network. The user interaction guided the network to localize the placenta more accurately. In the experiments, we trained two CNNs, one using 70 normal training cases and the other using 129 training cases including normal cases as well as cases with suspected placenta accreta spectrum (PAS). We evaluated the performance of the segmentation algorithms on two test sets: one with 20 normal cases and the other with 50 images from both normal women and women with suspected PAS. Results: For the normal test data, the average Dice similarity coefficient (DSC) was 92% and 82% for the uterine cavity and placenta, respectively. For the combination of normal and abnormal cases, the DSC was 88% and 83% for the uterine cavity and placenta, respectively. The 3D segmentation algorithm estimated the volume of the normal and abnormal uterine cavity and placenta with average volume estimation errors of 4% and 9%, respectively. Conclusions: The deep learning-based segmentation method provides a useful tool for volume estimation and analysis of the placenta and uterine cavity in human placental imaging.
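The volume estimation step reported above follows directly from the segmentation mask: voxel count times physical voxel size. A minimal sketch, with illustrative spacing values rather than the study's acquisition parameters:

```python
# Organ volume from a binary 3D mask and the scanner voxel spacing,
# plus the percentage volume estimation error against a reference mask.
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary mask in milliliters (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

def volume_error_pct(pred: np.ndarray, ref: np.ndarray, spacing_mm) -> float:
    """Absolute volume error as a percentage of the reference volume."""
    v_pred = mask_volume_ml(pred, spacing_mm)
    v_ref = mask_volume_ml(ref, spacing_mm)
    return abs(v_pred - v_ref) / v_ref * 100.0
```

The abstract's 4% and 9% figures would be averages of `volume_error_pct` over the normal and abnormal test cases, respectively.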
Affiliation(s)
- Maysam Shahedi
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
- Catherine Y. Spong
- University of Texas Southwestern Medical Center, Department of Obstetrics and Gynecology, Dallas, Texas, United States
- James D. Dormer
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
- Quyen N. Do
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
- Yin Xi
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
- University of Texas Southwestern Medical Center, Department of Clinical Science, Dallas, Texas, United States
- Matthew A. Lewis
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
- Christina Herrera
- University of Texas Southwestern Medical Center, Department of Obstetrics and Gynecology, Dallas, Texas, United States
- Ananth J. Madhuranthakam
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
- University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, Texas, United States
- Diane M. Twickler
- University of Texas Southwestern Medical Center, Department of Obstetrics and Gynecology, Dallas, Texas, United States
- University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
- Baowei Fei
- University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
- University of Texas Southwestern Medical Center, Department of Clinical Science, Dallas, Texas, United States
- University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, Texas, United States
6
Xi Y, Shahedi M, Do QN, Dormer J, Lewis MA, Fei B, Spong CY, Madhuranthakam AJ, Twickler DM. Assessing reproducibility in Magnetic Resonance (MR) Radiomics features between Deep-Learning segmented and Expert Manual segmented data and evaluating their diagnostic performance in Pregnant Women with suspected Placenta Accreta Spectrum (PAS). Proc SPIE Int Soc Opt Eng 2021; 11597:115972P. [PMID: 35784397; PMCID: PMC9248910; DOI: 10.1117/12.2581467]
Abstract
A deep-learning (DL) based segmentation tool was applied to a new magnetic resonance imaging dataset of pregnant women with suspected placenta accreta spectrum (PAS). Radiomic features from DL segmentation were compared to those from expert manual segmentation via intraclass correlation coefficients (ICC) to assess reproducibility. An additional imaging marker quantifying the placental location within the uterus (PLU) was included. Features with an ICC > 0.7 were used to build logistic regression models to predict hysterectomy. Of 2059 features, 781 (37.9%) had an ICC > 0.7. The AUC was 0.69 (95% CI 0.63-0.74) for manually segmented data and 0.78 (95% CI 0.73-0.83) for DL segmented data.
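The reproducibility filter described above (keeping features with ICC > 0.7 across the two segmentations) can be sketched with a standard two-way random-effects, absolute-agreement ICC(2,1); this is a generic textbook formulation, not the authors' exact implementation.

```python
# ICC(2,1) for n subjects rated by k raters (here, k = 2 segmentations),
# computed from the two-way ANOVA mean squares.
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ratings: array of shape (n_subjects, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()  # between raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A feature would then survive the filter when `icc2_1(feature_values) > 0.7`, with `feature_values` holding the manual and DL measurements per subject.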
Affiliation(s)
- Yin Xi
- Department of Radiology, University of Texas Southwestern Medical Center
- Maysam Shahedi
- Center for Imaging and Surgical Innovation and Department of Bioengineering, University of Texas at Dallas
- Quyen N Do
- Department of Radiology, University of Texas Southwestern Medical Center
- James Dormer
- Center for Imaging and Surgical Innovation and Department of Bioengineering, University of Texas at Dallas
- Matthew A Lewis
- Department of Radiology, University of Texas Southwestern Medical Center
- Baowei Fei
- Department of Radiology, University of Texas Southwestern Medical Center
- Center for Imaging and Surgical Innovation and Department of Bioengineering, University of Texas at Dallas
- Catherine Y Spong
- Department of Obstetrics and Gynecology, University of Texas Southwestern Medical Center
- Diane M Twickler
- Department of Radiology, University of Texas Southwestern Medical Center
7
Kan CNE, Gilat-Schmidt T, Ye DH. Enhancing Reproductive Organ Segmentation in Pediatric CT via Adversarial Learning. Proc SPIE Int Soc Opt Eng 2021; 11596:1159612. [PMID: 33994628; PMCID: PMC8122493; DOI: 10.1117/12.2582127]
Abstract
Accurately segmenting organs in abdominal computed tomography (CT) scans is crucial for clinical applications such as pre-operative planning and dose estimation. With the recent advent of deep learning algorithms, many robust frameworks have been proposed for organ segmentation in abdominal CT images. However, many of these frameworks require large amounts of training data in order to achieve high segmentation accuracy. Pediatric abdominal CT images containing reproductive organs are particularly hard to obtain since these organs are extremely sensitive to ionizing radiation. Hence, it is extremely challenging to train automatic segmentation algorithms on organs such as the uterus and the prostate. To address these issues, we propose a novel segmentation network with a built-in auxiliary classifier generative adversarial network (ACGAN) that conditionally generates additional features during training. The proposed CFG-SegNet (conditional feature generation segmentation network) is trained on a single loss function which combines adversarial loss, reconstruction loss, auxiliary classifier loss and segmentation loss. 2.5D segmentation experiments are performed on a custom data set containing 24 female CT volumes containing the uterus and 40 male CT volumes containing the prostate. CFG-SegNet achieves an average segmentation accuracy of 0.929 DSC (Dice Similarity Coefficient) on the prostate and 0.724 DSC on the uterus with 4-fold cross validation. The results show that our network is high-performing and has the potential to precisely segment difficult organs with few available training images.
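The single combined objective described above (adversarial + reconstruction + auxiliary-classifier + segmentation) can be sketched as a weighted sum of standard loss terms. The specific loss forms and weights below are assumptions for illustration, not the authors' exact configuration, and a real training loop would use an autodiff framework rather than plain numpy.

```python
# Hedged sketch of an ACGAN-style combined training objective.
import numpy as np

def bce_with_logits(logits, targets):
    """Numerically stable mean binary cross-entropy on raw logits."""
    return float(np.mean(np.maximum(logits, 0) - logits * targets
                         + np.log1p(np.exp(-np.abs(logits)))))

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy; logits (N, C), labels (N,) ints."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

def combined_loss(d_fake, recon, target, aux_logits, aux_labels,
                  seg_logits, seg_mask, w=(1.0, 10.0, 1.0, 1.0)):
    adv = bce_with_logits(d_fake, np.ones_like(d_fake))  # fool discriminator
    rec = float(np.abs(recon - target).mean())           # L1 reconstruction
    aux = cross_entropy(aux_logits, aux_labels)          # class conditioning
    seg = bce_with_logits(seg_logits, seg_mask)          # segmentation term
    return w[0] * adv + w[1] * rec + w[2] * aux + w[3] * seg
```

Tuning the weights `w` trades off how strongly the generated auxiliary features constrain the segmentation branch.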
Affiliation(s)
- Chi Nok Enoch Kan
- Department of Electrical and Computer Engineering, Marquette University, Milwaukee, USA
- Taly Gilat-Schmidt
- Department of Electrical and Computer Engineering, Marquette University, Milwaukee, USA
- Dong Hye Ye
- Department of Electrical and Computer Engineering, Marquette University, Milwaukee, USA