1. Yin Y, Clark AR, Collins SL. 3D Single Vessel Fractional Moving Blood Volume (3D-svFMBV): Fully Automated Tissue Perfusion Estimation Using Ultrasound. IEEE Trans Med Imaging 2024;43:2707-2717. [PMID: 38478454; DOI: 10.1109/tmi.2024.3376668]
Abstract
Power Doppler ultrasound (PD-US) is an ideal modality for assessing tissue perfusion as it is cheap, patient-friendly, and does not require ionizing radiation. However, meaningful inter-patient comparison is only possible if differences in tissue attenuation are corrected for. This can be done by standardizing the PD-US signal to a blood vessel assumed to have 100% vascularity. The original method for doing this is called fractional moving blood volume (FMBV). We describe a novel, fully automated method combining image processing, numerical modelling, and deep learning to estimate three-dimensional single vessel fractional moving blood volume (3D-svFMBV). We map the PD signals to a characteristic intensity profile within a single large vessel to define the standardization value at the high-shear vessel margins. This removes the need for mathematical correction for background signal, which can introduce error. The 3D-svFMBV method was first tested on synthetic images generated using the characteristics of the uterine artery and physiological ultrasound noise levels, demonstrating prediction of a standardization value close to the theoretical ideal. Clinical utility was explored using 143 first-trimester placental ultrasound volumes. More biologically plausible perfusion estimates were obtained, showing improved prediction of pre-eclampsia compared with estimates generated with the semi-automated original 3D-FMBV technique. The proposed 3D-svFMBV method overcomes the limitations of the original technique to provide accurate and robust placental perfusion estimation. This not only has the potential to provide an early pregnancy screening tool but may also be used to assess perfusion of other organs and tumors.
2. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023;12:6833. [PMID: 37959298; PMCID: PMC10649694; DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. US is considered cost-effective and easily accessible but is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to review recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full-text copies were assigned to the sections of OB/GYN and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost, Philipp Kosian, Jorge Jimenez Cruz, Ulrich Gembruch, Brigitte Strizek, Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
3. Herrera CL, Kim MJ, Do QN, Owen DM, Fei B, Twickler DM, Spong CY. The human placenta project: Funded studies, imaging technologies, and future directions. Placenta 2023;142:27-35. [PMID: 37634371; DOI: 10.1016/j.placenta.2023.08.067]
Abstract
The placenta plays a critical role in fetal development. It serves as a multi-functional organ that protects and nurtures the fetus during pregnancy. However, despite its importance, the intricacies of placental structure and function in normal and diseased states have remained largely unexplored. Thus, in 2014, the National Institute of Child Health and Human Development launched the Human Placenta Project (HPP). As of May 2023, the HPP has awarded over $101 million in research funds, resulting in 41 funded studies and 459 publications. We conducted a comprehensive review of these studies and publications to identify areas of funded research, advances in those areas, limitations of current research, and continued areas of need. This paper specifically reviews the studies funded by the HPP, followed by an in-depth discussion of advances and gaps within placental-focused imaging. We highlight the progress within magnetic resonance imaging and ultrasound, including the development of tools for the assessment of placental function and structure.
Affiliation(s)
- Christina L Herrera, David M Owen: Department of Obstetrics and Gynecology, UT Southwestern Medical Center, and Parkland Health, Dallas, TX, USA; Green Center for Reproductive Biology Sciences, UT Southwestern Medical Center, Dallas, TX, USA
- Meredith J Kim: University of Texas Southwestern Medical School, Dallas, TX, USA
- Quyen N Do: Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Baowei Fei: Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA; Advanced Imaging Research Center, UT Southwestern Medical Center, Dallas, TX, USA; Department of Bioengineering, University of Texas at Dallas, Dallas, TX, USA
- Diane M Twickler: Department of Obstetrics and Gynecology, UT Southwestern Medical Center, and Parkland Health, Dallas, TX, USA; Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Catherine Y Spong: Department of Obstetrics and Gynecology, UT Southwestern Medical Center, and Parkland Health, Dallas, TX, USA
4. Goudarzi S, Whyte J, Boily M, Towers A, Kilgour RD, Rivaz H. Segmentation of Arm Ultrasound Images in Breast Cancer-Related Lymphedema: A Database and Deep Learning Algorithm. IEEE Trans Biomed Eng 2023;70:2552-2563. [PMID: 37028332; DOI: 10.1109/tbme.2023.3253646]
Abstract
OBJECTIVE Breast cancer treatment often causes the removal of or damage to lymph nodes of the patient's lymphatic drainage system. This side effect is the origin of Breast Cancer-Related Lymphedema (BCRL), a noticeable increase in excess arm volume. Ultrasound imaging is a preferred modality for the diagnosis and progression monitoring of BCRL because of its low cost, safety, and portability. As the affected and unaffected arms look similar in B-mode ultrasound images, the thicknesses of the skin, subcutaneous fat, and muscle have been shown to be important biomarkers for this task. The segmentation masks are also helpful in monitoring longitudinal changes in the morphology and mechanical properties of tissue layers. METHODS For the first time, a publicly available ultrasound dataset containing the radio-frequency (RF) data of 39 subjects, together with manual segmentation masks from two experts, is provided. Inter- and intra-observer reproducibility studies performed on the segmentation maps show high Dice Score Coefficients (DSC) of 0.94±0.08 and 0.92±0.06, respectively. A Gated Shape Convolutional Neural Network (GSCNN) is modified for precise automatic segmentation of tissue layers, and its generalization performance is improved by the CutMix augmentation strategy. RESULTS We obtained an average DSC of 0.87±0.11 on the test set, which confirms the high performance of the method. CONCLUSION Automatic segmentation can pave the way for convenient and accessible staging of BCRL, and our dataset can facilitate the development and validation of such methods. SIGNIFICANCE Timely diagnosis and treatment of BCRL are crucially important in preventing irreversible damage.
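The Dice Score Coefficient used above to quantify inter-observer agreement has a simple closed form: twice the overlap divided by the total size of the two masks. A minimal illustrative sketch (not the paper's code; the toy masks are invented for the example):

```python
def dice(a, b):
    """Dice Similarity Coefficient between two binary masks,
    each represented as a set of (row, col) pixel coordinates."""
    if not a and not b:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two toy "expert" annotations of the same structure
mask1 = {(r, c) for r in range(2, 6) for c in range(2, 6)}  # 16 pixels
mask2 = {(r, c) for r in range(3, 7) for c in range(3, 7)}  # 16 pixels, 9 overlap
print(dice(mask1, mask2))  # 2*9 / (16+16) = 0.5625
```

A DSC of 1.0 means pixel-perfect agreement; values such as the 0.94 reported above indicate the two experts' masks overlap almost completely.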
5. Costanzo A, Ertl-Wagner B, Sussman D. AFNet Algorithm for Automatic Amniotic Fluid Segmentation from Fetal MRI. Bioengineering (Basel) 2023;10:783. [PMID: 37508809; PMCID: PMC10376488; DOI: 10.3390/bioengineering10070783]
Abstract
Amniotic Fluid Volume (AFV) is a crucial fetal biomarker when diagnosing specific fetal abnormalities. This study proposes a novel Convolutional Neural Network (CNN) model, AFNet, for segmenting amniotic fluid (AF) to facilitate clinical AFV evaluation. AFNet was trained and tested on a manually segmented and radiologist-validated AF dataset. AFNet outperforms ResUNet++ by using efficient feature mapping in the attention block and transposed convolutions in the decoder. Our experimental results show that AFNet achieved a mean Intersection over Union (mIoU) of 93.38% on our dataset, thereby outperforming other state-of-the-art models. While AFNet achieves performance scores similar to those of the UNet++ model, it does so using fewer than half as many parameters. By creating a detailed AF dataset and an improved CNN architecture, we enable the quantification of AFV in clinical practice, which can aid in diagnosing AF disorders during gestation.
Affiliation(s)
- Alejo Costanzo: Department of Electrical, Computer and Biomedical Engineering, Faculty of Engineering and Architectural Sciences, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST), Toronto Metropolitan University and St. Michael's Hospital, Toronto, ON M5B 1T8, Canada
- Birgit Ertl-Wagner: Department of Diagnostic Imaging, The Hospital for Sick Children, Toronto, ON M5G 1X8, Canada; Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON M5T 1W7, Canada
- Dafna Sussman: Department of Electrical, Computer and Biomedical Engineering, Faculty of Engineering and Architectural Sciences, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST), Toronto Metropolitan University and St. Michael's Hospital, Toronto, ON M5B 1T8, Canada; Department of Obstetrics and Gynecology, Faculty of Medicine, University of Toronto, Toronto, ON M5G 1E2, Canada
6. Andreasen LA, Feragen A, Christensen AN, Thybo JK, Svendsen MBS, Zepf K, Lekadir K, Tolsgaard MG. Multi-centre deep learning for placenta segmentation in obstetric ultrasound with multi-observer and cross-country generalization. Sci Rep 2023;13:2221. [PMID: 36755050; PMCID: PMC9908915; DOI: 10.1038/s41598-023-29105-x]
Abstract
The placenta is crucial to fetal well-being, and it plays a significant role in the pathogenesis of hypertensive pregnancy disorders. Moreover, a timely diagnosis of placenta previa may save lives. Ultrasound is the primary imaging modality in pregnancy, but high-quality imaging depends on access to equipment and staff, which is not available in all settings. Convolutional neural networks may help standardize the acquisition of images for fetal diagnostics. Our aim was to develop a deep learning based model for classification and segmentation of the placenta in ultrasound images. We trained a model based on manual annotations of 7,500 ultrasound images to identify and segment the placenta. The model's performance was compared to annotations made by 25 clinicians (experts, trainees, midwives). The overall image classification accuracy was 81%. The average intersection over union score (IoU) reached 0.78. The model's accuracy was lower than that of experts and trainees, but it outperformed all clinicians at delineating the placenta (IoU = 0.75 vs 0.69, 0.66, and 0.59). The model was cross-validated on 100 second-trimester images from Barcelona, yielding an accuracy of 76% and an IoU of 0.68. In conclusion, we developed a model for automatic classification and segmentation of the placenta with consistent performance across different patient populations. It may be used for automated detection of placenta previa and enable future deep learning research in placental dysfunction.
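The intersection over union score reported above is the companion metric to Dice: the overlap divided by the union of the two masks, so it penalizes both missed and spurious pixels. A minimal illustrative sketch (not the study's code; the toy masks are invented for the example):

```python
def iou(a, b):
    """Intersection over Union between two binary masks,
    each represented as a set of (row, col) pixel coordinates."""
    union = a | b
    if not union:
        return 1.0  # both masks empty: perfect agreement by convention
    return len(a & b) / len(union)

pred = {(r, c) for r in range(0, 5) for c in range(0, 5)}    # model mask, 25 px
truth = {(r, c) for r in range(0, 5) for c in range(0, 10)}  # reference mask, 50 px
print(iou(pred, truth))  # 25 / 50 = 0.5
```

This is why an IoU of 0.75 for the model versus 0.69 for experts is a meaningful delineation advantage: every point of IoU reflects both coverage of the true placenta and avoidance of over-segmentation.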
Affiliation(s)
- Lisbeth Anita Andreasen: Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Aasa Feragen: Technical University of Denmark (DTU) Compute, Lyngby, Denmark
- Morten Bo S Svendsen: Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Kilian Zepf: Technical University of Denmark (DTU) Compute, Lyngby, Denmark
- Karim Lekadir: Artificial Intelligence in Medicine Lab (BCN-AIM), Universitat de Barcelona, Barcelona, Spain
- Martin Grønnebæk Tolsgaard: Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark; Department of Fetal Medicine, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
7. Huang J, Do QN, Shahed M, Xi Y, Lewis MA, Herrera CL, Owen D, Spong CY, Madhuranthakam AJ, Twickler DM, Fei B. Deep learning based automatic segmentation of the placenta and uterine cavity on prenatal MR images. Proc SPIE Int Soc Opt Eng 2023;12465:124650N. [PMID: 38486806; PMCID: PMC10937245; DOI: 10.1117/12.2653659]
Abstract
Magnetic resonance imaging (MRI) has potential benefits in understanding fetal and placental complications in pregnancy. An accurate segmentation of the uterine cavity and placenta can help facilitate fast and automated analyses of placenta accreta spectrum and other pregnancy complications. In this study, we trained a deep neural network for fully automatic segmentation of the uterine cavity and placenta from MR images of pregnant women with and without placental abnormalities. The two datasets were axial MRI data of 241 pregnant women, of whom 101 also had sagittal MRI data. Our trained model was able to perform fully automatic 3D segmentation of MR image volumes and achieved an average Dice similarity coefficient (DSC) of 92% for the uterine cavity and 82% for the placenta on the sagittal dataset, and an average DSC of 87% for the uterine cavity and 82% for the placenta on the axial dataset. Our automatic segmentation method is the first step in designing an analytics tool to assess the risk of pregnant women with placenta accreta spectrum.
Affiliation(s)
- James Huang, Maysam Shahed: Department of Bioengineering, The University of Texas at Dallas, TX; Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX
- Quyen N. Do, Matthew A. Lewis: Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX
- Yin Xi: Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX; Department of Population and Data Sciences, The University of Texas Southwestern Medical Center, Dallas, TX
- Christina L. Herrera, David Owen, Catherine Y. Spong: Department of Obstetrics and Gynecology, The University of Texas Southwestern Medical Center, Dallas, TX
- Diane M. Twickler: Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX; Department of Obstetrics and Gynecology, The University of Texas Southwestern Medical Center, Dallas, TX
- Baowei Fei: Department of Bioengineering, The University of Texas at Dallas, TX; Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX; Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX
8. Gleed AD, Chen Q, Jackman J, Mishra D, Chandramohan V, Self A, Bhatnagar S, Papageorghiou AT, Noble JA. Automatic Image Guidance for Assessment of Placenta Location in Ultrasound Video Sweeps. Ultrasound Med Biol 2023;49:106-121. [PMID: 36241588; DOI: 10.1016/j.ultrasmedbio.2022.08.006]
Abstract
Ultrasound-based assistive tools are aimed at reducing the high skill needed to interpret a scan by providing automatic image guidance. This may encourage uptake of ultrasound (US) clinical assessments in rural settings in low- and middle-income countries (LMICs), where well-trained sonographers can be scarce. This paper describes a new method that automatically generates an assistive video overlay to provide image guidance to a user to assess placenta location. The user captures US video by following a sweep protocol that scans a U-shape on the lower maternal abdomen. The sweep trajectory is simple and easy to learn. We initially explore a 2-D embedding of placenta shapes, mapping manually segmented placentas in US video frames to a 2-D space. We map 2013 frames from 11 videos. This provides insight into the spectrum of placenta shapes that appear when using the sweep protocol. We propose classification of the placenta shapes from three observed clusters: complex, tip and rectangular. We use this insight to design an effective automatic segmentation algorithm, combining a U-Net with a CRF-RNN module to enhance segmentation performance with respect to placenta shape. The U-Net + CRF-RNN algorithm automatically segments the placenta and maternal bladder. We assess segmentation performance using both area and shape metrics. We report results comparable to the state-of-the-art for automatic placenta segmentation on the Dice metric, achieving 0.83 ± 0.15 evaluated on 2127 frames from 10 videos. We also qualitatively evaluate 78,308 frames from 135 videos, assessing if the anatomical outline is correctly segmented. We found that addition of the CRF-RNN improves over a baseline U-Net when faced with a complex placenta shape, which we observe in our 2-D embedding, up to 14% with respect to the percentage shape error. 
From the segmentations, an assistive video overlay is automatically constructed that (i) highlights the placenta and bladder, (ii) determines the lower placenta edge and highlights this location as a point and (iii) labels a 2-cm clearance on the lower placenta edge. The 2-cm clearance is chosen to satisfy current clinical guidelines. We propose to assess the placenta location by comparing the 2-cm region and the bottom of the bladder, which represents a coarse localization of the cervix. Anatomically, the bladder must sit above the cervix region. We present proof-of-concept results for the video overlay.
Affiliation(s)
- Alexander D Gleed, Qingchao Chen, James Jackman, J Alison Noble: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Divyanshu Mishra: Translational Health Science and Technology Institute, Faridabad, India
- Alice Self, Aris T Papageorghiou: Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
9. Zimmer VA, Gomez A, Skelton E, Wright R, Wheeler G, Deng S, Ghavami N, Lloyd K, Matthew J, Kainz B, Rueckert D, Hajnal JV, Schnabel JA. Placenta segmentation in ultrasound imaging: Addressing sources of uncertainty and limited field-of-view. Med Image Anal 2022;83:102639. [PMID: 36257132; PMCID: PMC7614009; DOI: 10.1016/j.media.2022.102639]
Abstract
Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placenta appearance, (ii) the limited image quality of US, which results in highly variable reference annotations, and (iii) the limited field-of-view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular under limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion, and image segmentation. This results in high-quality US segmentation, with reduced image artifacts, of larger structures such as the placenta that extend beyond the field-of-view of single probes.
Affiliation(s)
- Veronika A. Zimmer (corresponding author): School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Faculty of Informatics, Technical University of Munich, Germany
- Alberto Gomez, Robert Wright, Gavin Wheeler, Shujie Deng, Nooshin Ghavami, Karen Lloyd, Jacqueline Matthew, Joseph V. Hajnal: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Emily Skelton: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; School of Health Sciences, City, University of London, London, United Kingdom
- Bernhard Kainz: BioMedIA group, Imperial College London, London, United Kingdom; FAU Erlangen-Nürnberg, Germany
- Daniel Rueckert: Faculty of Informatics, Technical University of Munich, Germany; BioMedIA group, Imperial College London, London, United Kingdom
- Julia A. Schnabel: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Faculty of Informatics, Technical University of Munich, Germany; Helmholtz Center Munich, Germany
10. Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022;25:104713. [PMID: 35856024; PMCID: PMC9287600; DOI: 10.1016/j.isci.2022.104713]
Abstract
Several reviews have examined artificial intelligence (AI) techniques to improve pregnancy outcomes, but none focused on ultrasound images. This survey explores how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Of 1269 studies, 107 were included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification was the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The areas that gained the most traction within the pregnancy domain were the fetal head (43), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Highlights:
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized into general, head, heart, face, and abdomen
- The most used AI techniques are classification, segmentation, object detection, and RL
- The research and practical implications are included
11. Amniotic Fluid Classification and Artificial Intelligence: Challenges and Opportunities. Sensors (Basel) 2022;22:4570. [PMID: 35746352; PMCID: PMC9228529; DOI: 10.3390/s22124570]
Abstract
A fetal ultrasound (US) is a technique to examine a baby's maturity and development. US examinations have varying purposes throughout pregnancy; in the second and third trimesters, US tests are performed for the assessment of Amniotic Fluid Volume (AFV), a key indicator of fetal health. Disorders resulting from abnormal AFV levels, commonly referred to as oligohydramnios or polyhydramnios, may pose a serious threat to a mother's or child's health. This paper accumulates and compares the most recent advancements in Artificial Intelligence (AI)-based techniques for the diagnosis and classification of AFV levels. Additionally, we provide a thorough and highly inclusive breakdown of other relevant factors that may cause abnormal AFV levels, including, but not limited to, abnormalities in the placenta, kidneys, or central nervous system, as well as other contributors such as preterm birth or twin-to-twin transfusion syndrome. Furthermore, we provide a concise overview of all the Machine Learning (ML) and Deep Learning (DL) techniques, along with the datasets supplied by various researchers. This study also provides a brief rundown of the challenges and opportunities encountered in this field, along with prospective research directions and promising angles to further explore.
12. Sharifzadeh M, Benali H, Rivaz H. Investigating Shift Variance of Convolutional Neural Networks in Ultrasound Image Segmentation. IEEE Trans Ultrason Ferroelectr Freq Control 2022;69:1703-1713. [PMID: 35344491; DOI: 10.1109/tuffc.2022.3162800]
Abstract
While accuracy is an evident criterion for ultrasound image segmentation, output consistency across different tests is equally crucial for tracking changes in regions of interest in applications such as monitoring the patients' response to treatment, measuring the progression or regression of the disease, reaching a diagnosis, or treatment planning. Convolutional neural networks (CNNs) have attracted rapidly growing interest in automatic ultrasound image segmentation recently. However, CNNs are not shift-equivariant, meaning that, if the input translates, e.g., in the lateral direction by one pixel, the output segmentation may drastically change. To the best of our knowledge, this problem has not been studied in ultrasound image segmentation or even more broadly in ultrasound images. Herein, we investigate and quantify the shift-variance problem of CNNs in this application and further evaluate the performance of a recently published technique, called BlurPooling, for addressing the problem. In addition, we propose the Pyramidal BlurPooling method that outperforms BlurPooling in both output consistency and segmentation accuracy. Finally, we demonstrate that data augmentation is not a replacement for the proposed method. Source code is available at http://code.sonography.ai.
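The shift variance described above stems from strided downsampling inside CNNs: subsampling without low-pass filtering aliases the signal, so a one-pixel translation of the input can change the output entirely. A toy sketch of the effect (not the paper's code; BlurPooling addresses this by blurring before subsampling):

```python
def maxpool1d(x, stride=2):
    """Stride-2 max pooling over adjacent pairs: the kind of
    downsampling step that makes a CNN shift-variant."""
    return [max(x[i], x[i + 1]) for i in range(0, len(x) - 1, stride)]

signal = [0, 0, 1, 1, 0, 0, 1, 1]
shifted = signal[-1:] + signal[:-1]  # same signal, translated by one sample

print(maxpool1d(signal))   # [0, 1, 0, 1]
print(maxpool1d(shifted))  # [1, 1, 1, 1] -> identical content, different output
```

Stacked through many layers of a segmentation network, this is how a one-pixel lateral shift of an ultrasound frame can produce a drastically different mask, which is the output-consistency problem the paper quantifies.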
13. A 10-Year Retrospective Review of Prenatal Applications, Current Challenges and Future Prospects of Three-Dimensional Sonoangiography. Diagnostics (Basel) 2021;11:1511. [PMID: 34441444; PMCID: PMC8394388; DOI: 10.3390/diagnostics11081511]
Abstract
Realistic reconstruction of angioarchitecture within the morphological landmark with three-dimensional sonoangiography (three-dimensional power Doppler; 3D PD) may augment standard prenatal ultrasound and Doppler assessments. This study aimed to (a) present a technical overview, (b) determine additional advantages, (c) identify current challenges, and (d) predict trajectories of 3D PD for prenatal assessments. The PubMed and Scopus databases were searched for the last decade. Although 307 publications addressed our objectives, their heterogeneity was too broad for statistical analyses. Important findings are therefore presented in descriptive format and supplemented with the authors' 3D PD images. Acquisition, analysis, and display techniques need to be personalized to improve the quality of flow-volume data. While 3D PD indices of the first-trimester placenta may improve the prediction of preeclampsia, research is needed to standardize the measurement protocol. In highly experienced hands, the unique 3D PD findings improve the diagnostic accuracy of placenta accreta spectrum. A lack of quality assurance is the central challenge to incorporating 3D PD in prenatal care. Machine learning may broaden clinical translations of prenatal 3D PD. Due to its operator dependency, 3D PD has low reproducibility. Until standardization and quality assurance protocols are established, its use as a stand-alone clinical or research tool cannot be recommended.