1.
Bai J, Zhou Z, Ou Z, Koehler G, Stock R, Maier-Hein K, Elbatel M, Martí R, Li X, Qiu Y, Gou P, Chen G, Zhao L, Zhang J, Dai Y, Wang F, Silvestre G, Curran K, Sun H, Xu J, Cai P, Jiang L, Lan L, Ni D, Zhong M, Chen G, Campello VM, Lu Y, Lekadir K. PSFHS challenge report: Pubic symphysis and fetal head segmentation from intrapartum ultrasound images. Med Image Anal 2024; 99:103353. [PMID: 39340971] [DOI: 10.1016/j.media.2024.103353]
Abstract
Segmentation of fetal and maternal structures in intrapartum ultrasound imaging, as advocated by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) for monitoring labor progression, is a crucial first step for quantitative diagnosis and clinical decision-making. It requires specialized analysis by obstetrics professionals, in a task that is (i) highly time- and cost-consuming and (ii) prone to inconsistent results. The utility of automatic segmentation algorithms for biometry has been proven, though existing results remain suboptimal. To push forward advancements in this area, the Grand Challenge on Pubic Symphysis-Fetal Head Segmentation (PSFHS) was held alongside the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). This challenge aimed to advance the development of automatic segmentation algorithms at an international scale, providing the largest dataset to date: 5,101 intrapartum ultrasound images collected from two ultrasound machines across three hospitals of two institutions. The scientific community's enthusiastic participation led to the selection of the top 8 of 179 entries from 193 registrants in the initial phase to proceed to the competition's second stage. These algorithms have elevated the state of the art in automatic PSFHS from intrapartum ultrasound images. A thorough analysis of the results pinpointed ongoing challenges in the field and outlined recommendations for future work. The top solutions and the complete dataset remain publicly available, fostering further advancements in automatic segmentation and biometry for intrapartum ultrasound imaging.
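Challenge entries of this kind are typically ranked with region-overlap metrics such as the Dice similarity coefficient. As an illustrative sketch only (the toy masks and the `dice_score` helper below are ours, not part of the challenge code), per-class and mean Dice for a two-structure label map can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for one label in integer mask arrays."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # label absent from both masks: count as perfect agreement
    return 2.0 * np.logical_and(p, g).sum() / denom

# Toy 4x4 masks: 0 = background, 1 = pubic symphysis, 2 = fetal head
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [2, 2, 0, 0],
                 [2, 2, 0, 0]])
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [2, 2, 0, 0],
                 [2, 0, 0, 0]])

per_class = {c: dice_score(pred, gt, c) for c in (1, 2)}
mean_dice = sum(per_class.values()) / len(per_class)
```

A class-wise intersection-over-union variant differs only in the denominator (union of the two masks instead of the sum of their areas).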
Affiliation(s)
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China; Auckland Bioengineering Institute, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand.
- Zihao Zhou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Zhanhong Ou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Gregor Koehler
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Raphael Stock
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Klaus Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marawan Elbatel
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Robert Martí
- Computer Vision and Robotics Group, University of Girona, Girona, Spain
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Yaoyang Qiu
- Canon Medical Systems (China) Co., Ltd., Beijing, China
- Panjie Gou
- Canon Medical Systems (China) Co., Ltd., Beijing, China
- Gongping Chen
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Lei Zhao
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
- Jianxun Zhang
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Yu Dai
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Fangyijie Wang
- School of Medicine, University College Dublin, Dublin, Ireland
- Kathleen Curran
- School of Computer Science, University College Dublin, Dublin, Ireland
- Hongkun Sun
- School of Statistics & Mathematics, Zhejiang Gongshang University, Hangzhou, China
- Jing Xu
- School of Statistics & Mathematics, Zhejiang Gongshang University, Hangzhou, China
- Pengzhou Cai
- School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Lu Jiang
- School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Libin Lan
- School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound & Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging & School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Mei Zhong
- NanFang Hospital of Southern Medical University, Guangzhou, China
- Gaowen Chen
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Víctor M Campello
- Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Karim Lekadir
- Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
2.
Wei W, Xu J, Xia F, Liu J, Zhang Z, Wu J, Wei T, Feng H, Ma Q, Jiang F, Zhu X, Zhang X. Deep learning-assisted diagnosis of benign and malignant parotid gland tumors based on automatic segmentation of ultrasound images: a multicenter retrospective study. Front Oncol 2024; 14:1417330. [PMID: 39184051] [PMCID: PMC11341398] [DOI: 10.3389/fonc.2024.1417330]
Abstract
Objectives: To construct deep learning-assisted diagnosis models, based on automatic segmentation of ultrasound images, that help radiologists differentiate benign from malignant parotid gland tumors (PGTs). Methods: A total of 582 patients histopathologically diagnosed with PGTs were retrospectively recruited from 4 centers, and their data were collected for analysis. Radiomics features from six deep learning models (e.g., ResNet18, Inception_v3) were analyzed on ultrasound images obtained with the best-performing of three automatic segmentation models (Deeplabv3, UNet++, and UNet). The performance of three physicians was compared with and without assistance from the optimal model. The Net Reclassification Index (NRI) and Integrated Discrimination Improvement (IDI) were used to evaluate the clinical benefit of the optimal model. Results: The Deeplabv3 model performed best for automatic segmentation. The ResNet18 deep learning model had the best prediction performance, with an area under the receiver-operating characteristic curve of 0.808 (0.694-0.923), 0.809 (0.712-0.906), and 0.812 (0.680-0.944) in the internal test set and external test sets 1 and 2, respectively. Meanwhile, the model-assisted clinical and overall benefits were markedly enhanced for two of the three radiologists (in the internal validation set, NRI: 0.259 and 0.213 [p = 0.002 and 0.017], IDI: 0.284 and 0.201 [p = 0.005 and 0.043], respectively; in external test set 1, NRI: 0.183 and 0.161 [p = 0.019 and 0.008], IDI: 0.205 and 0.184 [p = 0.031 and 0.045], respectively; in external test set 2, NRI: 0.297 and 0.297 [p = 0.038 and 0.047], IDI: 0.332 and 0.294 [p = 0.031 and 0.041], respectively). Conclusions: The deep learning model constructed for automatic segmentation of ultrasound images can improve the diagnostic performance of radiologists for PGTs.
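For intuition, the category-free (continuous) NRI credits the assisted reading whenever it moves an event's predicted risk up or a non-event's risk down. The function and toy risk scores below are hypothetical illustrations, not the study's data:

```python
def continuous_nri(y_true, p_old, p_new):
    """Category-free Net Reclassification Index: events (y=1) whose
    predicted risk rises under the new model count as correct
    reclassifications; non-events (y=0) whose risk falls likewise."""
    up_e = sum(1 for y, a, b in zip(y_true, p_old, p_new) if y == 1 and b > a)
    down_e = sum(1 for y, a, b in zip(y_true, p_old, p_new) if y == 1 and b < a)
    up_ne = sum(1 for y, a, b in zip(y_true, p_old, p_new) if y == 0 and b > a)
    down_ne = sum(1 for y, a, b in zip(y_true, p_old, p_new) if y == 0 and b < a)
    n_e = sum(y_true)
    n_ne = len(y_true) - n_e
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne

# Toy example: 3 malignant (1) and 3 benign (0) cases
y = [1, 1, 1, 0, 0, 0]
risk_unaided = [0.4, 0.5, 0.6, 0.5, 0.4, 0.3]  # hypothetical reader-only risks
risk_aided = [0.6, 0.7, 0.5, 0.3, 0.5, 0.2]    # hypothetical AI-assisted risks
nri = continuous_nri(y, risk_unaided, risk_aided)
```

A positive value indicates net movement of risks in the clinically correct direction; the categorical NRI reported in studies like this one uses predefined risk thresholds instead of any up/down change.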
Affiliation(s)
- Wei Wei
- Department of Ultrasound, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
- Jingya Xu
- Department of Radiology, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
- Fei Xia
- Department of Ultrasound, WuHu Hospital, East China Normal University (The Second People’s Hospital, WuHu), Wuhu, Anhui, China
- Jun Liu
- Department of Ultrasound, Linyi Central Hospital, Linyi, Shandong, China
- Zekai Zhang
- Department of Ultrasound, Zibo Central Hospital, Zibo, Shandong, China
- Jing Wu
- Department of Ultrasound, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
- Tianjun Wei
- Department of Ultrasound, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
- Huijun Feng
- Department of Ultrasound, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
- Qiang Ma
- Department of Ultrasound, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
- Feng Jiang
- Department of Ultrasound, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
- Xiangming Zhu
- Department of Ultrasound, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
- Xia Zhang
- Department of Ultrasound, The First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, China
3.
Liang B, Peng F, Luo D, Zeng Q, Wen H, Zheng B, Zou Z, An L, Wen H, Wen X, Liao Y, Yuan Y, Li S. Automatic segmentation of 15 critical anatomical labels and measurements of cardiac axis and cardiothoracic ratio in fetal four chambers using nnU-NetV2. BMC Med Inform Decis Mak 2024; 24:128. [PMID: 38773456] [PMCID: PMC11106923] [DOI: 10.1186/s12911-024-02527-x]
Abstract
BACKGROUND Accurate segmentation of critical anatomical structures in fetal four-chamber view images is essential for the early detection of congenital heart defects. Current prenatal screening methods rely on manual measurements, which are time-consuming and prone to inter-observer variability. This study develops an AI-based model using the state-of-the-art nnU-NetV2 architecture for automatic segmentation and measurement of key anatomical structures in fetal four-chamber view images. METHODS A dataset of 1,083 high-quality fetal four-chamber view images was annotated with 15 critical anatomical labels and divided into training/validation (867 images) and test (216 images) sets. An AI-based model using the nnU-NetV2 architecture was trained on the annotated images and evaluated with the mean Dice coefficient (mDice) and mean intersection over union (mIoU). The model's performance in automatically computing the cardiac axis (CAx) and cardiothoracic ratio (CTR) was compared with measurements from sonographers with varying levels of experience. RESULTS The AI-based model achieved an mDice of 87.11% and an mIoU of 77.68% for segmentation of the critical anatomical structures. The model's automated CAx and CTR measurements showed strong agreement with those of experienced sonographers, with intraclass correlation coefficients (ICCs) of 0.83 and 0.81, respectively. Bland-Altman analysis further confirmed the high agreement between the model and experienced sonographers. CONCLUSION We developed an AI-based model using the nnU-NetV2 architecture for accurate segmentation and automated measurement of critical anatomical structures in fetal four-chamber view images. The model demonstrated high segmentation accuracy and strong agreement with experienced sonographers in computing clinically relevant parameters. This approach has the potential to improve the efficiency and reliability of prenatal cardiac screening, ultimately contributing to the early detection of congenital heart defects.
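Once the heart, thorax, and septum are segmented, CAx and CTR reduce to simple geometry. A minimal sketch under assumed conventions (direction vectors for the axis angle, an area-based ratio for CTR; the paper's exact definitions may differ):

```python
import math

def cardiac_axis_deg(septum_vec, thorax_vec):
    """Angle in degrees between the interventricular septum direction and
    the anteroposterior thorax axis, both given as 2-D direction vectors."""
    (sx, sy), (tx, ty) = septum_vec, thorax_vec
    cos = (sx * tx + sy * ty) / (math.hypot(sx, sy) * math.hypot(tx, ty))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def cardiothoracic_area_ratio(heart_px, thorax_px):
    """Area-based cardiothoracic ratio from segmentation pixel counts
    (one of several CTR conventions; circumference ratios are also used)."""
    return heart_px / thorax_px

cax = cardiac_axis_deg((1.0, 1.0), (0.0, 1.0))  # septum leftward-tilted 45 degrees
ctr = cardiothoracic_area_ratio(250, 1000)      # heart occupies a quarter of the thorax
```

In clinical practice the septum and thorax lines would come from landmarks on the segmentation masks rather than hand-written vectors.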
Affiliation(s)
- Bocheng Liang
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Fengfeng Peng
- Department of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082, China
- Dandan Luo
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Qing Zeng
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Huaxuan Wen
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Bowen Zheng
- Department of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082, China
- Zhiying Zou
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Liting An
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Huiying Wen
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Xin Wen
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Yimei Liao
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Ying Yuan
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Shengli Li
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
4.
Belciug S. Autonomous fetal morphology scan: deep learning + clustering merger - the second pair of eyes behind the doctor. BMC Med Inform Decis Mak 2024; 24:102. [PMID: 38641580] [PMCID: PMC11027391] [DOI: 10.1186/s12911-024-02505-3]
Abstract
Congenital anomalies are the main cause of fetal death and of infant morbidity or mortality during childhood. They can be detected through a fetal morphology scan. An experienced sonographer (with more than 2,000 performed scans) has a detection rate for congenital anomalies of around 52%; the rate drops to 32.5% for a junior sonographer. One viable solution to improve these performances is Artificial Intelligence. The first step in a fetal morphology scan is the differentiation between the view planes of the fetus, followed by segmentation of the internal organs in each view plane. This study presents an Artificial Intelligence-empowered decision support system that labels anatomical organs using a merger of deep learning and clustering techniques, followed by organ segmentation with YOLOv8. Our framework was tested on a fetal morphology image dataset of the fetal abdomen. The experimental results show that the system can correctly label the view plane and the corresponding organs on real-time ultrasound movies. Trial registration: The study is registered as "Pattern recognition and Anomaly Detection in fetal morphology using Deep Learning and Statistical Learning (PARADISE)", project number 101PCE/2022, project code PN-III-P4-PCE-2021-0057; ClinicalTrials.gov unique identifying number NCT05738954, date of registration 02.11.2023.
Affiliation(s)
- Smaranda Belciug
- Department of Computer Science, Faculty of Sciences, University of Craiova, 200585, Craiova, Romania.
5.
Sarker MMK, Singh VK, Alsharid M, Hernandez-Cruz N, Papageorghiou AT, Noble JA. COMFormer: Classification of Maternal-Fetal and Brain Anatomy Using a Residual Cross-Covariance Attention Guided Transformer in Ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:1417-1427. [PMID: 37665699] [DOI: 10.1109/tuffc.2023.3311879]
Abstract
Monitoring the healthy development of a fetus requires accurate and timely identification of the different maternal-fetal structures as they grow. To facilitate this objective in an automated fashion, we propose a deep-learning-based image classification architecture, COMFormer, to classify maternal-fetal and brain anatomical structures present in 2-D fetal ultrasound (US) images. The proposed architecture classifies two subcategories separately: maternal-fetal structures (abdomen, brain, femur, thorax, mother's cervix (MC), and others) and brain anatomical structures (trans-thalamic (TT), trans-cerebellum (TC), trans-ventricular (TV), and non-brain (NB)). Our architecture relies on a transformer-based approach that leverages spatial and global features through a newly designed residual cross-variance attention block. This block introduces an advanced cross-covariance attention (XCA) mechanism to capture a long-range representation from the input using spatial (e.g., shape, texture, intensity) and global features. To build COMFormer, we used a large publicly available dataset (BCNatal) consisting of 12,400 images from 1,792 subjects. Experimental results show that COMFormer outperforms recent CNN- and transformer-based models, achieving 95.64% and 96.33% classification accuracy on maternal-fetal and brain anatomy, respectively.
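The XCA idea can be sketched independently of COMFormer's residual block: instead of the usual N×N token-token attention map, Q and K are L2-normalized along the token axis and a d×d channel-channel map is formed, so cost scales with feature dimension rather than token count. A simplified NumPy illustration (our own sketch, omitting the learned temperature and multi-head split of the original design):

```python
import numpy as np

def xc_attention(q, k, v, tau=1.0):
    """Cross-covariance attention: mixes feature channels via a (d x d)
    attention map instead of the (N x N) token-token map of standard
    self-attention."""
    # L2-normalize each channel (column) along the token axis
    qn = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    kn = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-8)
    attn = kn.T @ qn / tau                          # (d, d) channel covariance
    attn = np.exp(attn - attn.max(axis=0, keepdims=True))
    attn = attn / attn.sum(axis=0, keepdims=True)   # softmax over channels
    return v @ attn                                 # (N, d) output tokens

rng = np.random.default_rng(0)
n_tokens, d = 16, 8
q, k, v = (rng.standard_normal((n_tokens, d)) for _ in range(3))
out = xc_attention(q, k, v)
```

Because the attention map is d×d, the cost of the attention product is linear in the number of tokens, which is the practical motivation for channel attention in image transformers.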
6.
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred imaging method. US is considered cost-effective and easily accessible, but it is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to overview recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme, and full-text articles were assigned to the OB/GYN subspecialties and their research topics. As a result, this review includes 189 articles published from 1994 to 2023; 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
7.
Jiang Z, Salcudean SE, Navab N. Robotic ultrasound imaging: State-of-the-art and future perspectives. Med Image Anal 2023; 89:102878. [PMID: 37541100] [DOI: 10.1016/j.media.2023.102878]
Abstract
Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis because it provides non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. Robotic US Systems (RUSS) aim to overcome this shortcoming by offering reproducibility, while also aiming at improved dexterity and intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. For teleoperated RUSS, we summarize the technical developments and clinical evaluations. This survey then focuses on recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence present the key techniques that enable intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the community toward understanding and modeling expert sonographers' semantic reasoning and action; we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisition could be considered as valuable and essential as the progress made in the robotic US examination itself. This article provides both engineers and clinicians with a comprehensive understanding of RUSS by surveying the underlying techniques, and presents the challenges that the scientific community needs to face in the coming years to achieve the ultimate goal of developing intelligent robotic sonographer colleagues, capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging.
Affiliation(s)
- Zhongliang Jiang
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany.
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
8.
Coronado-Gutiérrez D, Eixarch E, Monterde E, Matas I, Traversi P, Gratacós E, Bonet-Carne E, Burgos-Artizzu XP. Automatic Deep Learning-Based Pipeline for Automatic Delineation and Measurement of Fetal Brain Structures in Routine Mid-Trimester Ultrasound Images. Fetal Diagn Ther 2023; 50:480-490. [PMID: 37573787] [DOI: 10.1159/000533203]
Abstract
INTRODUCTION The aim of this study was to develop a pipeline using state-of-the-art deep learning methods to automatically delineate and measure several of the most important brain structures in fetal brain ultrasound (US) images. METHODS The dataset comprised 5,331 images of the fetal brain acquired during the routine mid-trimester US scan. The proposed pipeline automatically performs three steps: brain plane classification (transventricular, transthalamic, or transcerebellar plane); delineation of nine brain structures; and automatic measurement from the structure delineations. The methods were trained on a subset of 4,331 images, and each step was evaluated on the remaining 1,000 images. RESULTS Plane classification reached 98.6% average class accuracy. Brain structure delineation obtained an average pixel accuracy above 96% and a Jaccard index above 70%. Automatic measurement achieved an absolute error below 3.5% for the four standard head biometries (head circumference, biparietal diameter, occipitofrontal diameter, and cephalic index), 9% for transcerebellar diameter, 12% for cavum septi pellucidi ratio, and 26% for Sylvian fissure operculization degree. CONCLUSIONS The proposed pipeline shows the potential of deep learning methods to delineate fetal head and brain structures and to obtain automatic measurements from each standard anatomical plane acquired during routine fetal US examination.
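Of the four head biometries, the cephalic index and the head circumference follow directly from BPD and OFD once the skull ellipse is delineated. A sketch using the Ramanujan ellipse-perimeter approximation (an assumption for illustration; the pipeline's exact measurement procedure is not specified here):

```python
import math

def head_circumference_mm(bpd_mm, ofd_mm):
    """Head circumference approximated as the perimeter of an ellipse with
    BPD and OFD as its two diameters (Ramanujan's approximation)."""
    a, b = bpd_mm / 2.0, ofd_mm / 2.0  # semi-axes
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def cephalic_index(bpd_mm, ofd_mm):
    """Cephalic index: BPD as a percentage of OFD (normal roughly 70-85)."""
    return 100.0 * bpd_mm / ofd_mm

hc = head_circumference_mm(85.0, 105.0)  # about 299 mm for these diameters
ci = cephalic_index(85.0, 105.0)         # about 81
```

In the automated setting, BPD and OFD would themselves come from an ellipse fitted to the delineated skull contour rather than from manual calipers.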
Affiliation(s)
- David Coronado-Gutiérrez
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu, University of Barcelona), Barcelona, Spain
- Transmural Biotech S. L., Barcelona, Spain
- Elisenda Eixarch
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu, University of Barcelona), Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi I Sunyer (IDIBAPS), Barcelona, Spain
- Centre for Biomedical Research on Rare Diseases (CIBERER), Barcelona, Spain
- Elena Monterde
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu, University of Barcelona), Barcelona, Spain
- Isabel Matas
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu, University of Barcelona), Barcelona, Spain
- Paola Traversi
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu, University of Barcelona), Barcelona, Spain
- Eduard Gratacós
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu, University of Barcelona), Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi I Sunyer (IDIBAPS), Barcelona, Spain
- Centre for Biomedical Research on Rare Diseases (CIBERER), Barcelona, Spain
- Elisenda Bonet-Carne
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu, University of Barcelona), Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi I Sunyer (IDIBAPS), Barcelona, Spain
- Barcelona Tech, Universitat Politècnica de Catalunya, Barcelona, Spain
- Xavier P Burgos-Artizzu
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu, University of Barcelona), Barcelona, Spain
9.
Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802] [DOI: 10.1002/pd.6411]
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the keywords "neural networks", OR "artificial intelligence", OR "machine learning", OR "deep learning", AND "obstetrics", OR "obstetrical", OR "fetus", OR "foetus", OR "fetal", OR "foetal", OR "pregnancy", OR "pregnant", AND "ultrasound" from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound, with obstetric ultrasound defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta, and AI defined as the use of neural networks, machine learning, or deep learning methods. The search identified 127 papers that fulfilled the inclusion criteria. Current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses such as assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Lea Nehme
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Alfred Abuhamad
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
10.
Gabler E, Nissen M, Altstidl TR, Titzmann A, Packhauser K, Maier A, Fasching PA, Eskofier BM, Leutheuser H. Fetal Re-Identification in Multiple Pregnancy Ultrasound Images Using Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083405] [DOI: 10.1109/embc40787.2023.10340336]
Abstract
Ultrasound examinations during pregnancy can detect abnormal fetal development, a leading cause of perinatal mortality. In multiple pregnancies, the position of the fetuses may change between examinations, so the individual fetus cannot be clearly identified. Fetal re-identification may improve diagnostic capabilities by tracing individual fetal changes. This work evaluates the feasibility of fetal re-identification on FETAL_PLANES_DB, a publicly available dataset of singleton pregnancy ultrasound images. Five dataset subsets with 6,491 images from 1,088 pregnant women and two re-identification frameworks (Torchreid, FastReID) are evaluated. FastReID achieves a mean average precision of 68.77% (68.42%) and a mean precision at rank 10 of 89.60% (95.55%) when trained on images showing the fetal brain (abdomen). Visualization with gradient-weighted class activation mapping shows that the classifiers appear to rely on anatomical features. We conclude that fetal re-identification in ultrasound images may be feasible. However, more work on additional datasets, including images from multiple pregnancies and several subsequent examinations, is required to ensure and investigate performance stability and explainability. Clinical relevance: To date, fetuses in multiple pregnancies cannot be distinguished between ultrasound examinations. This work provides the first evidence of the feasibility of fetal re-identification in pregnancy ultrasound images, which may improve diagnostic capabilities in clinical practice, such as longitudinal analysis of fetal changes or abnormalities.
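The reported mean average precision can be made concrete with a small sketch: for each query image, the gallery is ranked by similarity, and precision is averaged over the ranks at which the correct identity appears. The toy identities below are hypothetical, not the dataset's:

```python
def average_precision(ranked_ids, query_id):
    """AP for one query: ranked_ids is the gallery identity list sorted by
    descending similarity to the query."""
    hits, precisions = 0, []
    for rank, gid in enumerate(ranked_ids, start=1):
        if gid == query_id:
            hits += 1
            precisions.append(hits / rank)  # precision at this matching rank
    return sum(precisions) / hits if hits else 0.0

def mean_ap(ranked_lists, query_ids):
    """Mean of per-query APs, the re-identification mAP."""
    return sum(average_precision(r, q)
               for r, q in zip(ranked_lists, query_ids)) / len(query_ids)

# Toy example: two queries, each ranked against a 4-item gallery
ranked = [["a", "b", "a", "c"], ["c", "b", "b", "a"]]
queries = ["a", "b"]
m = mean_ap(ranked, queries)
```

Rank-k precision, the other metric quoted above, instead asks only whether at least one correct match appears within the top k gallery entries.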
|
11
|
Xiao S, Zhang J, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Application and Progress of Artificial Intelligence in Fetal Ultrasound. J Clin Med 2023; 12:3298. [PMID: 37176738 PMCID: PMC10179567 DOI: 10.3390/jcm12093298] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Revised: 04/01/2023] [Accepted: 04/28/2023] [Indexed: 05/15/2023] Open
Abstract
Prenatal ultrasonography is the most crucial imaging modality during pregnancy. However, problems such as high fetal mobility, excessive maternal abdominal wall thickness, and inter-observer variability limit the clinical application of traditional ultrasound. The combination of artificial intelligence (AI) and obstetric ultrasound may help optimize fetal ultrasound examination by shortening the examination time, reducing the physician's workload, and improving diagnostic accuracy. AI has been successfully applied to automatic fetal ultrasound standard plane detection, biometric parameter measurement, and disease diagnosis to facilitate conventional imaging approaches. In this review, we thoroughly review the applications and advantages of AI in prenatal fetal ultrasound and discuss the challenges and promise of this new field.
Affiliation(s)
- Sushan Xiao, Junmin Zhang, Ye Zhu, Zisang Zhang, Haiyan Cao, Mingxing Xie, Li Zhang: Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
|
12
|
Bastiaansen WAP, Klein S, Koning AHJ, Niessen WJ, Steegers-Theunissen RPM, Rousian M. Computational methods for the analysis of early-pregnancy brain ultrasonography: a systematic review. EBioMedicine 2023; 89:104466. [PMID: 36796233 PMCID: PMC9958260 DOI: 10.1016/j.ebiom.2023.104466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 01/09/2023] [Accepted: 01/23/2023] [Indexed: 02/16/2023] Open
Abstract
BACKGROUND Early-pregnancy screening of the fetal brain is becoming routine clinical practice. Currently, this screening is performed by manual measurements and visual analysis, which is time-consuming and prone to errors. Computational methods may support this screening. Hence, the aim of this systematic review is to gain insight into the future research directions needed to bring automated early-pregnancy ultrasound analysis of the human brain to clinical practice. METHODS We searched PubMed (Medline ALL Ovid), EMBASE, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, and Google Scholar, from inception until June 2022. This study is registered in PROSPERO at CRD42020189888. Studies about computational methods for the analysis of human brain ultrasonography acquired before the 20th week of pregnancy were included. The key reported attributes were: level of automation, whether the method was learning-based, the use of routine clinical data depicting normal and abnormal brain development, public sharing of source code and data, and analysis of confounding factors. FINDINGS Our search identified 2575 studies, of which 55 were included. 76% used an automatic method, 62% a learning-based method, and 45% used routine clinical data; for 13%, the data also depicted abnormal development. None of the studies publicly shared their source code, and only two shared their data. Finally, 35% did not analyse the influence of confounding factors. INTERPRETATION Our review showed an interest in automatic, learning-based methods. To bring these methods to clinical practice, we recommend that studies use routine clinical data depicting both normal and abnormal development, make their datasets and source code publicly available, and be attentive to the influence of confounding factors.
Introduction of automated computational methods for early-pregnancy brain ultrasonography will save valuable time during screening, and ultimately lead to better detection, treatment, and prevention of neurodevelopmental disorders. FUNDING The Erasmus MC Medical Research Advisor Committee (grant number: FB 379283).
Affiliation(s)
- Wietske A P Bastiaansen: Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands; Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Stefan Klein: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Anton H J Koning: Department of Pathology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Wiro J Niessen: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Melek Rousian: Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
|
13
|
Balagalla UB, Jayasooriya J, de Alwis C, Subasinghe A. Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images – a survey. Comput Methods Biomech Biomed Eng Imaging Vis 2023. [DOI: 10.1080/21681163.2023.2179343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Affiliation(s)
- U. B. Balagalla, J.V.D. Jayasooriya, C. de Alwis, A. Subasinghe: Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
|
14
|
Sarno L, Neola D, Carbone L, Saccone G, Carlea A, Miceli M, Iorio GG, Mappa I, Rizzo G, Girolamo RD, D'Antonio F, Guida M, Maruotti GM. Use of artificial intelligence in obstetrics: not quite ready for prime time. Am J Obstet Gynecol MFM 2023; 5:100792. [PMID: 36356939 DOI: 10.1016/j.ajogmf.2022.100792] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 10/18/2022] [Accepted: 10/28/2022] [Indexed: 11/09/2022]
Abstract
Artificial intelligence is finding several applications in healthcare settings. This study aimed to report evidence on the effectiveness of artificial intelligence applications in obstetrics. Through a narrative review of the literature, we describe the use of artificial intelligence in different obstetrical areas: prenatal diagnosis, fetal heart monitoring, prediction and management of pregnancy-related complications (preeclampsia, preterm birth, gestational diabetes mellitus, and placenta accreta spectrum), and labor. Artificial intelligence seems to be a promising tool to help clinicians in daily clinical activity. The main advantages that emerged from this review are the reduction of inter- and intra-operator variability, shorter procedure times, and improved overall diagnostic performance. However, the diffusion of these systems into routine clinical practice still raises several issues. Reported evidence is very limited, and further studies are needed to confirm the clinical applicability of artificial intelligence. Moreover, clinicians should be better trained to use these systems, and evidence-based guidelines on this topic should be produced to enhance the strengths of artificial systems and minimize their limitations.
Affiliation(s)
- Laura Sarno, Daniele Neola, Luigi Carbone, Gabriele Saccone, Annunziata Carlea, Giuseppe Gabriele Iorio, Raffaella Di Girolamo, Maurizio Guida: Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy
- Marco Miceli: Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy; CEINGE Biotecnologie Avanzate, Naples, Italy
- Ilenia Mappa, Giuseppe Rizzo: Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy
- Francesco D'Antonio: Center for Fetal Care and High Risk Pregnancy, Department of Obstetrics and Gynecology, University G. D'Annunzio of Chieti-Pescara, Chieti, Italy
- Giuseppe Maria Maruotti: Gynecology and Obstetrics Unit, Department of Public Health, School of Medicine, University of Naples Federico II, Naples, Italy
|
15
|
Alzubaidi M, Agus M, Shah U, Makhlouf M, Alyafei K, Househ M. Ensemble Transfer Learning for Fetal Head Analysis: From Segmentation to Gestational Age and Weight Prediction. Diagnostics (Basel) 2022; 12:2229. [PMID: 36140628 PMCID: PMC9497941 DOI: 10.3390/diagnostics12092229] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 08/25/2022] [Accepted: 08/26/2022] [Indexed: 11/16/2022] Open
Abstract
Ultrasound is one of the most commonly used imaging modalities in obstetrics to monitor the growth of a fetus during the gestation period. Specifically, ultrasound images are routinely used to gather fetal information, including body measurements, anatomical structure, fetal movements, and pregnancy complications. Recent developments in artificial intelligence and computer vision provide new methods for the automated analysis of medical images in many domains, including ultrasound. We present a full end-to-end framework for segmenting, measuring, and estimating fetal gestational age and weight based on two-dimensional ultrasound images of the fetal head. Our segmentation framework is based on the following components: (i) eight segmentation architectures (UNet, UNet Plus, Attention UNet, UNet 3+, TransUNet, FPN, LinkNet, and Deeplabv3) fine-tuned with the lightweight EfficientNetB0 network, and (ii) a weighted voting method for building an optimized ensemble transfer learning model (ETLM). The ETLM was then used to segment the fetal head and to perform accurate measurements of the head circumference and seven other values of the fetal head, which we incorporated into a multiple regression model for predicting the week of gestational age and the estimated fetal weight (EFW). We validated the regression model by comparing our results with expert physician measurements and longitudinal references. We evaluated the performance of our framework on the public dataset HC18: we obtained a segmentation accuracy of 98.53% mean intersection over union (mIoU), surpassing state-of-the-art methods, and a measurement accuracy of 1.87 mm mean absolute difference (MAD). Finally, we obtained a 0.03% mean square error (MSE) in predicting the week of gestational age and a 0.05% MSE in predicting EFW.
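The weighted-voting step behind an ensemble like the ETLM can be sketched as follows. The weights, probability maps, and function name here are illustrative assumptions, not the paper's trained ensemble.

```python
import numpy as np

def weighted_vote(prob_maps, weights, threshold=0.5):
    """Fuse per-model foreground probability maps into one binary mask."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize so votes sum to 1
    # Contract the model axis: fused[y, x] = sum_m w[m] * prob_maps[m][y, x]
    fused = np.tensordot(weights, np.stack(prob_maps), axes=1)
    return (fused >= threshold).astype(np.uint8)

# Three toy 2x2 probability maps from hypothetical segmentation models.
maps = [np.array([[0.9, 0.2], [0.4, 0.8]]),
        np.array([[0.8, 0.1], [0.6, 0.9]]),
        np.array([[0.7, 0.3], [0.2, 0.7]])]
mask = weighted_vote(maps, weights=[0.5, 0.3, 0.2])
print(mask)  # [[1 0]
             #  [0 1]]
```

In practice the weights would be chosen on a validation set, e.g. proportional to each model's Dice score.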
Affiliation(s)
- Mahmood Alzubaidi, Marco Agus, Uzair Shah, Mowafa Househ: College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Michel Makhlouf, Khalid Alyafei: Sidra Medical and Research Center, Sidra Medicine, Doha P.O. Box 26999, Qatar
- Correspondence: (M.A.); (M.H.)
|
16
|
Reddy CD, Van den Eynde J, Kutty S. Artificial intelligence in perinatal diagnosis and management of congenital heart disease. Semin Perinatol 2022; 46:151588. [PMID: 35396036 DOI: 10.1016/j.semperi.2022.151588] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Prenatal diagnosis and management of congenital heart disease (CHD) have progressed substantially in the past few decades. Fetal echocardiography can accurately detect and diagnose approximately 85% of cardiac anomalies. Prenatal diagnosis of CHD results in improved care, with better risk stratification, perioperative status, and survival. However, there is much work to be done: only a minority of CHD is actually identified prenatally. This seemingly incongruous gap is due, in part, to diminished recognition of an anomaly even when it is present in the images, and to the need for increased training to obtain specialized cardiac views. Artificial intelligence (AI) is a field within computer science that focuses on the development of algorithms that "learn, reason, and self-correct" in a human-like fashion. When applied to fetal echocardiography, AI has the potential to improve image acquisition, image optimization, automated measurements, identification of outliers, classification of diagnoses, and prediction of outcomes. Adoption of AI in the field has thus far been limited by a paucity of data, limited resources to implement new technologies, and legal and ethical concerns. Despite these barriers, recognition of the potential benefits will push us toward a future in which AI becomes a routine part of clinical practice.
Affiliation(s)
- Charitha D Reddy: Division of Pediatric Cardiology, Stanford University, Palo Alto, CA, USA
- Jef Van den Eynde: Helen B. Taussig Heart Center, The Johns Hopkins Hospital and School of Medicine, Baltimore, MD, USA; Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Shelby Kutty: Helen B. Taussig Heart Center, The Johns Hopkins Hospital and School of Medicine, Baltimore, MD, USA
|
17
|
Moser F, Huang R, Papież BW, Namburete AIL. BEAN: Brain Extraction and Alignment Network for 3D Fetal Neurosonography. Neuroimage 2022; 258:119341. [PMID: 35654376 DOI: 10.1016/j.neuroimage.2022.119341] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 04/08/2022] [Accepted: 05/28/2022] [Indexed: 01/18/2023] Open
Abstract
Brain extraction (masking of extra-cerebral tissues) and alignment are fundamental first steps of most neuroimage analysis pipelines. The lack of automated solutions for 3D ultrasound (US) has therefore limited its potential as a neuroimaging modality for studying fetal brain development using routinely acquired scans. In this work, we propose a convolutional neural network (CNN) that accurately and consistently aligns and extracts the fetal brain from minimally pre-processed 3D US scans. Our multi-task CNN, Brain Extraction and Alignment Network (BEAN), consists of two independent branches: 1) a fully-convolutional encoder-decoder branch for brain extraction of unaligned scans, and 2) a two-step regression-based branch for similarity alignment of the brain to a common coordinate space. BEAN was tested on 356 fetal head 3D scans spanning the gestational range of 14 to 30 weeks, significantly outperforming all current alternatives for fetal brain extraction and alignment. BEAN achieved state-of-the-art performance for both tasks, with a mean Dice Similarity Coefficient (DSC) of 0.94 for the brain extraction masks, and a mean DSC of 0.93 for the alignment of the target brain masks. The presented experimental results show that brain structures such as the thalamus, choroid plexus, cavum septum pellucidum, and Sylvian fissure, are consistently aligned throughout the dataset and remain clearly visible when the scans are averaged together. The BEAN implementation and related code can be found under www.github.com/felipemoser/kelluwen.
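The Dice Similarity Coefficient (DSC) used to score both BEAN tasks is a standard overlap metric. A minimal sketch with toy masks (not the authors' code):

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

# Toy predicted and reference masks: 2 overlapping of 3 foreground pixels each.
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(pred, ref), 3))  # 0.667
```

A DSC of 1.0 means perfect overlap; the 0.93-0.94 values above indicate near-complete agreement with the reference masks.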
Affiliation(s)
- Felipe Moser: Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, UK
- Ruobing Huang: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Nuffield Department of Women's and Reproductive Health, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Bartłomiej W Papież: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Ana I L Namburete: Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, UK; Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom
|
18
|
Zeng W, Luo J, Cheng J, Lu Y. Efficient fetal ultrasound image segmentation for automatic head circumference measurement using a lightweight deep convolutional neural network. Med Phys 2022; 49:5081-5092. [PMID: 35536111 DOI: 10.1002/mp.15700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 03/20/2022] [Accepted: 04/24/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Fetal head circumference (HC) is an important biometric parameter used to assess fetal development in obstetric clinical practice. Most existing methods use deep neural networks for automatic fetal HC measurement from two-dimensional ultrasound images, and some achieve relatively high prediction accuracy. However, few of these methods focus on optimizing model efficiency. Our purpose is to develop a more efficient approach for this task, which could help doctors measure HC faster and would be more suitable for deployment on devices with scarce computing resources. METHODS In this paper, we present a very lightweight deep convolutional neural network for automatic fetal head segmentation from ultrasound images. Using a sequential prediction network architecture, the proposed model performs much faster inference while maintaining high prediction accuracy. In addition, we use depthwise separable convolution to replace part of the standard convolutions in the network and shrink the input image to further improve efficiency. After obtaining the fetal head segmentation, post-processing, including morphological processing and least-squares ellipse fitting, is applied to obtain the fetal HC. All experiments in this work were performed on a public dataset, HC18, with 999 fetal ultrasound images for training and 335 for testing. The dataset is publicly available on https://hc18.grand-challenge.org/ and the code for our method is publicly available on https://github.com/ApeMocker/CSM-for-fetal-HC-measurement. RESULTS Our model has only 0.13 million parameters and achieves an inference speed of 28 ms per frame on a CPU and 0.194 ms per frame on a GPU, which, to our knowledge, far exceeds all existing deep learning-based models. Experimental results showed that the method achieved a mean absolute difference of 1.97 (±1.89) mm and a Dice similarity coefficient of 97.61 (±1.72)% on the HC18 test set, comparable to the state-of-the-art. CONCLUSION We presented a very lightweight deep learning-based model for fast and accurate fetal head segmentation from two-dimensional ultrasound images, which is then used to calculate the fetal HC. The proposed method could help obstetricians measure the fetal head circumference more efficiently with high accuracy, and has the potential to be applied where computing resources are scarce.
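The final step above, turning a fitted ellipse into a head circumference, is commonly done with Ramanujan's perimeter approximation. A hedged sketch: the function name is ours, and the paper may use a different formula.

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's approximation for the perimeter of an ellipse with
    semi-axes a and b; often used to convert a fitted head ellipse to HC."""
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# Sanity check: a circle (a == b) must reduce to 2*pi*r.
print(round(ellipse_circumference(10, 10), 3))  # 62.832
# HC for toy semi-axes of 50 and 40 (same units as a and b, e.g. mm).
print(round(ellipse_circumference(50, 40), 2))
```

The approximation is accurate to well under 0.1% for the mild eccentricities typical of fetal head ellipses.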
Affiliation(s)
- Wen Zeng, Jiaru Cheng, Yiling Lu: School of Biomedical Engineering, Shenzhen campus of Sun Yat-sen University, Shenzhen, Guangdong, 518107, China
- Jie Luo: School of Biomedical Engineering, Shenzhen campus of Sun Yat-sen University, Shenzhen, Guangdong, 518107, China; Key Laboratory of Sensing Technology and Biomedical Instrument of Guangdong Province, Guangdong Provincial Engineering and Technology Center of Advanced and Portable Medical Devices, Sun Yat-sen University, Guangzhou, China
|
19
|
Fetal ultrasound image segmentation using dilated multi-scale-LinkNet. Int J Health Sci (Qassim) 2022. [DOI: 10.53730/ijhs.v6ns1.6047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Ultrasound imaging is routinely conducted for prenatal care in many countries to determine the health of the fetus, the pregnancy's progress, and the baby's due date. The intrinsic properties of fetal images at different stages of pregnancy make automatic extraction of the fetal head from ultrasound image data difficult. The proposed work develops a deep learning model called Dilated Multi-scale-LinkNet for automatically segmenting fetal skulls from two-dimensional ultrasound image data. The network builds on LinkNet, which offers good interpretability in biomedical applications. Convolutional layers with dilations are added following the encoders; dilated convolution enlarges the receptive field without loss of resolution. The model is trained and evaluated on the HC18 grand challenge dataset, which contains 2D ultrasound images from different pregnancy stages. Experiments on ultrasound images of women at different pregnancy stages show that the model achieves a 94.82% Dice score, 1.9 mm ADF, 0.72 DF, and 2.02 HD when segmenting the fetal skull. Employing Dilated Multi-scale-LinkNet improves segmentation accuracy and all evaluation metrics compared with existing methods.
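The effect of dilation described above, growing the receptive field without extra parameters or loss of resolution, can be illustrated with a short receptive-field calculation. This is a generic sketch of the dilation principle, not the paper's code, and the chosen rates are illustrative.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions: each layer
    adds (k - 1) * d pixels of context, starting from a single pixel."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3x3 layers: plain vs. dilated with rates 1, 2, 4. Both stacks have
# the same parameter count, but dilation more than doubles the context seen
# by each output pixel -- the motivation for dilated LinkNet variants.
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7
print(receptive_field([3, 3, 3], [1, 2, 4]))  # 15
```

Because no pooling or striding is involved, the feature map keeps its full spatial resolution while gaining this larger context.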
|
20
|
Yang C, Liao S, Yang Z, Guo J, Zhang Z, Yang Y, Guo Y, Yin S, Liu C, Kang Y. RDHCformer: Fusing ResDCN and Transformers for Fetal Head Circumference Automatic Measurement in 2D Ultrasound Images. Front Med (Lausanne) 2022; 9:848904. [PMID: 35425784 PMCID: PMC9002127 DOI: 10.3389/fmed.2022.848904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 03/07/2022] [Indexed: 11/13/2022] Open
Abstract
Fetal head circumference (HC) is an important biological parameter for monitoring the healthy development of the fetus. Since HC measurements are affected by the skill and experience of the sonographer, rapid, accurate, and automatic measurement of fetal HC in prenatal ultrasound is of great significance. We propose a new anchor-free, one-stage network for rotated elliptic object detection, which is also an end-to-end network for automatic fetal HC measurement that requires no post-processing. The network combines a simple transformer structure with a convolutional neural network (CNN) for a lightweight design, making full use of the transformer's powerful global feature extraction and the CNN's local feature extraction to capture continuous and complete skull edge information; the two complement each other, improving the detection precision of fetal HC without significantly increasing computation. To reduce the large variation in intersection over union (IoU) caused by slight angle deviations in rotated elliptic object detection, we use a soft stage-wise regression (SSR) strategy for angle regression and add a Kullback-Leibler divergence (KLD) term, which approximates an IoU loss, to the total loss function. The proposed method achieves good results on the HC18 dataset, demonstrating its effectiveness. This study is expected to help less experienced sonographers, support precision medicine, and relieve the worldwide shortage of sonographers for prenatal ultrasound.
Affiliation(s)
- Chaoran Yang, Yingjian Yang, Yingwei Guo: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Medical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
- Shanshan Liao, Shaowei Yin, Caixia Liu: Department of Obstetrics, Shengjing Hospital of China Medical University, Shenyang, China
- Zeyu Yang: Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Jiaqi Guo, Zhichao Zhang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yan Kang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Medical Device Innovation Center, Shenzhen Technology University, Shenzhen, China; Engineering Research Centre of Medical Imaging and Intelligent Analysis, Ministry of Education, Shenyang, China
|
21
|
Wang X, Wang W, Cai X. Automatic measurement of fetal head circumference using a novel GCN-assisted deep convolutional network. Comput Biol Med 2022; 145:105515. [DOI: 10.1016/j.compbiomed.2022.105515] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 03/31/2022] [Accepted: 04/10/2022] [Indexed: 11/03/2022]
|
22
|
Sun Y, Yang H, Zhou J, Wang Y. ISSMF: Integrated semantic and spatial information of multi-level features for automatic segmentation in prenatal ultrasound images. Artif Intell Med 2022; 125:102254. [DOI: 10.1016/j.artmed.2022.102254] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 12/27/2021] [Accepted: 02/05/2022] [Indexed: 11/02/2022]
23. He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021; 8:729978. [PMID: 34977053] [PMCID: PMC8716504] [DOI: 10.3389/fmed.2021.729978]
Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, the research on AI in prenatal US has practical significance with its application to prenatal US diagnosis improving work efficiency, providing quantitative assessments, standardizing measurements, improving diagnostic accuracy, and automating image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.
Affiliation(s)
- Lizhu Chen
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
24. Sahli H, Ben Slama A, Mouelhi A, Soayeh N, Rachdi R, Sayadi M. A computer-aided method based on geometrical texture features for a precocious detection of fetal hydrocephalus in ultrasound images. Technol Health Care 2021; 28:643-664. [PMID: 32200362] [DOI: 10.3233/thc-191752]
Abstract
BACKGROUND Hydrocephalus is the most common anomaly of the fetal head, characterized by an excessive accumulation of fluid in the brain. Diagnosing fetal heads using traditional evaluation techniques is generally time-consuming and error-prone. Usually, fetal head size is computed from an ultrasound (US) image at around 20-22 weeks of gestational age (GA). Biometric measurements are extracted and compared with ground-truth charts to identify normal or abnormal growth. METHODS In this paper, an attempt has been made to enhance the hydrocephalus characterization process by extracting additional geometrical and textural features to design an efficient recognition system. The advantage of this work lies in its reduced processing time and lower complexity compared with standard automatic approaches for routine examination. The proposed method targets the early detection of fetal malformation, alerting experts to the existence of an abnormal outcome. The first task is devoted to a proposed pre-processing model using standard filtering and a segmentation scheme based on a modified Hough transform (MHT) to detect the region of interest. The obtained clinical parameters are then presented to a principal component analysis (PCA) model to obtain a reduced number of measures, which are employed in the classification stage. RESULTS Thanks to the combination of geometrical and statistical features, the classification process achieved more than 96% accuracy in detecting pathological subjects at premature ages. CONCLUSIONS The experimental results illustrate the accuracy of the proposed classification method for a factual diagnosis of fetal head malformation.
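The PCA-based feature-reduction stage described above can be sketched for two-dimensional feature vectors as follows (a generic power-iteration illustration, not the authors' implementation; all names are illustrative):

```python
import math

def pca_first_component(samples):
    """Dominant principal direction of 2-D feature vectors,
    found by power iteration on the sample covariance matrix."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    centered = [(x - mx, y - my) for x, y in samples]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(100):  # power iteration
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(*w)
        v = (w[0] / norm, w[1] / norm)
    return v

def project(samples, direction):
    """Reduce each feature vector to its scalar coordinate along `direction`."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    return [(x - mx) * direction[0] + (y - my) * direction[1]
            for x, y in samples]
```

In the paper's pipeline, the projected coordinates (here one per sample) would replace the full geometric/textural feature vector as input to the classifier.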
Affiliation(s)
- Hanene Sahli
- University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
- Amine Ben Slama
- University of Tunis El Manar, ISTMT, LR13ES07, LRBTM, Tunis, Tunisia
- Aymen Mouelhi
- University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
- Nesrine Soayeh
- Obstetrics, Gynecology and Reproductive Department, Military Hospital, Tunis, Tunisia
- Radhouane Rachdi
- Obstetrics, Gynecology and Reproductive Department, Military Hospital, Tunis, Tunisia
- Mounir Sayadi
- University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
25. Ghelich Oghli M, Shabanzadeh A, Moradi S, Sirjani N, Gerami R, Ghaderi P, Sanei Taheri M, Shiri I, Arabi H, Zaidi H. Automatic fetal biometry prediction using a novel deep convolutional network architecture. Phys Med 2021; 88:127-137. [PMID: 34242884] [DOI: 10.1016/j.ejmp.2021.06.020]
Abstract
PURPOSE Fetal biometric measurements face a number of challenges, including the presence of speckle, limited soft-tissue contrast and difficulties in the presence of low amniotic fluid. This work proposes a convolutional neural network for automatic segmentation and measurement of fetal biometric parameters, including biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), and femur length (FL) from ultrasound images, which relies on attention gates incorporated into the multi-feature pyramid Unet (MFP-Unet) network. METHODS The proposed approach, referred to as Attention MFP-Unet, learns to extract/detect salient regions automatically via the attention gates, treating them as the object of interest. After determining the type of anatomical structure in the image using a convolutional neural network, Niblack's thresholding technique was applied as a pre-processing algorithm for head and abdomen identification, whereas a novel algorithm was used for femur extraction. A publicly available dataset (HC18 grand-challenge) and clinical data of 1334 subjects were utilized for training and evaluation of the Attention MFP-Unet algorithm. RESULTS Dice similarity coefficient (DSC), Hausdorff distance (HD), percentage of good contours, the conformity coefficient, and average perpendicular distance (APD) were employed for quantitative evaluation of fetal anatomy segmentation. In addition, correlation analysis, good contours, and conformity were employed to evaluate the accuracy of the biometry predictions. Attention MFP-Unet achieved 0.98, 1.14 mm, 100%, 0.95, and 0.2 mm for DSC, HD, good contours, conformity, and APD, respectively. CONCLUSIONS Quantitative evaluation demonstrated the superior performance of the Attention MFP-Unet compared to state-of-the-art approaches commonly employed for automatic measurement of fetal biometric parameters.
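The pre-processing step above applies Niblack's local thresholding. A minimal brute-force sketch of that technique on a grayscale image stored as nested lists (illustrative only; the window size and k value are common defaults, not necessarily the values used in the paper):

```python
import math

def niblack_threshold(image, window=3, k=-0.2):
    """Niblack local thresholding: for each pixel, the threshold is
    T = local mean + k * local std over a (window x window) neighbourhood,
    clipped at image borders; pixels above T map to 1, others to 0."""
    h, w = len(image), len(image[0])
    r = window // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(vals) / len(vals)
            std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
            out[y][x] = 1 if image[y][x] > mean + k * std else 0
    return out
```

Because the threshold adapts to the local mean and contrast, bright structures survive even under the uneven illumination typical of ultrasound, which is why local schemes like this are preferred over a single global threshold.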
Affiliation(s)
- Mostafa Ghelich Oghli
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran; Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Ali Shabanzadeh
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
- Shakiba Moradi
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
- Nasim Sirjani
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
- Reza Gerami
- Radiation Sciences Research Center (RSRC), Aja University of Medical Sciences, Tehran, Iran
- Payam Ghaderi
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
- Morteza Sanei Taheri
- Department of Radiology, Shohada-e-Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, CH-1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
26. Zeng Y, Tsui PH, Wu W, Zhou Z, Wu S. Fetal Ultrasound Image Segmentation for Automatic Head Circumference Biometry Using Deeply Supervised Attention-Gated V-Net. J Digit Imaging 2021; 34:134-148. [PMID: 33483862] [PMCID: PMC7887128] [DOI: 10.1007/s10278-020-00410-5]
Abstract
Automatic computerized segmentation of the fetal head from ultrasound images and head circumference (HC) biometric measurement remain challenging, due to the inherent characteristics of fetal ultrasound images at different trimesters of pregnancy. In this paper, we proposed a new deep learning method for automatic fetal ultrasound image segmentation and HC biometry: deeply supervised attention-gated (DAG) V-Net, which incorporated the attention mechanism and deep supervision strategy into V-Net models. In addition, a multi-scale loss function was introduced for deep supervision. The training set of the HC18 Challenge was expanded with data augmentation to train the DAG V-Net deep learning models. The trained models were used to automatically segment the fetal head from two-dimensional ultrasound images, followed by morphological processing, edge detection, and ellipse fitting. The fitted ellipses were then used for HC biometric measurement. The proposed DAG V-Net method was evaluated on the testing set of HC18 (n = 355), in terms of four performance indices: Dice similarity coefficient (DSC), Hausdorff distance (HD), HC difference (DF), and HC absolute difference (ADF). Experimental results showed that DAG V-Net had a DSC of 97.93%, a DF of 0.09 ± 2.45 mm, an ADF of 1.77 ± 1.69 mm, and an HD of 1.29 ± 0.79 mm. The proposed DAG V-Net method ranks fifth among the participants in the HC18 Challenge. By incorporating the attention mechanism and deep supervision, the proposed method yielded better segmentation performance than conventional U-Net and V-Net methods. Compared with published state-of-the-art methods, the proposed DAG V-Net had better or comparable segmentation performance. The proposed DAG V-Net may be used as a new method for fetal ultrasound image segmentation and HC biometry. The code of DAG V-Net will be made available publicly on https://github.com/xiaojinmao-code/ .
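The biometry step above measures HC as the perimeter of the ellipse fitted to the segmented head. A small sketch using Ramanujan's perimeter approximation (the function names and the convention of mapping BPD/OFD to the minor/major ellipse axes are illustrative assumptions, not the authors' code):

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's second approximation to the perimeter of an ellipse
    with semi-axes a and b; the error is negligible at the mild
    eccentricities of a fetal head."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

def head_circumference_mm(bpd_mm, ofd_mm):
    """HC from biparietal and occipitofrontal diameters, treating them
    as the minor and major axes of the fitted ellipse."""
    return ellipse_circumference(ofd_mm / 2.0, bpd_mm / 2.0)
```

For a circular head (BPD = OFD) the formula reduces exactly to pi times the diameter, which is a convenient sanity check.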
Affiliation(s)
- Yan Zeng
- Department of Biomedical Engineering, Faculty of Environmental and Life Sciences, Beijing University of Technology, Beijing, China
- Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan; Medical Imaging Research Center, Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan
- Weiwei Wu
- College of Biomedical Engineering, Capital Medical University, Beijing, China
- Zhuhuang Zhou
- Department of Biomedical Engineering, Faculty of Environmental and Life Sciences, Beijing University of Technology, Beijing, China
- Shuicai Wu
- Department of Biomedical Engineering, Faculty of Environmental and Life Sciences, Beijing University of Technology, Beijing, China
27. Sharma H, Drukker L, Chatelain P, Droste R, Papageorghiou AT, Noble JA. Knowledge representation and learning of operator clinical workflow from full-length routine fetal ultrasound scan videos. Med Image Anal 2021; 69:101973. [PMID: 33550004] [DOI: 10.1016/j.media.2021.101973]
Abstract
Ultrasound is a widely used imaging modality, yet it is well-known that scanning can be highly operator-dependent and difficult to perform, which limits its wider use in clinical practice. The literature on understanding what makes clinical sonography hard to learn and how sonography varies in the field is sparse, restricted to small-scale studies on the effectiveness of ultrasound training schemes, the role of ultrasound simulation in training, and the effect of introducing scanning guidelines and standards on diagnostic image quality. The Big Data era, and the recent and rapid emergence of machine learning as a more mainstream large-scale data analysis technique, presents a fresh opportunity to study sonography in the field at scale for the first time. Large-scale analysis of video recordings of full-length routine fetal ultrasound scans offers the potential to characterise differences between the scanning proficiency of experts and trainees that would be tedious and time-consuming to do manually due to the vast amounts of data. Such research would be informative to better understand operator clinical workflow when conducting ultrasound scans to support skills training, optimise scan times, and inform building better user-machine interfaces. This paper is to our knowledge the first to address sonography data science, which we consider in the context of second-trimester fetal sonography screening. Specifically, we present a fully-automatic framework to analyse operator clinical workflow solely from full-length routine second-trimester fetal ultrasound scan videos. An ultrasound video dataset containing more than 200 hours of scan recordings was generated for this study. We developed an original deep learning method to temporally segment the ultrasound video into semantically meaningful segments (the video description). The resulting semantic annotation was then used to depict operator clinical workflow (the knowledge representation). Machine learning was applied to the knowledge representation to characterise operator skills and assess operator variability. For video description, our best-performing deep spatio-temporal network shows favourable results in cross-validation (accuracy: 91.7%), statistical analysis (correlation: 0.98, p < 0.05) and retrospective manual validation (accuracy: 76.4%). For knowledge representation of operator clinical workflow, a three-level abstraction scheme consisting of a Subject-specific Timeline Model (STM), Summary of Timeline Features (STF), and an Operator Graph Model (OGM), was introduced that led to a significant decrease in dimensionality and computational complexity compared to raw video data. The workflow representations were learnt to discriminate between operator skills, where a proposed convolutional neural network-based model showed most promising performance (cross-validation accuracy: 98.5%, accuracy on unseen operators: 76.9%). These were further used to derive operator-specific scanning signatures and operator variability in terms of type, order and time distribution of constituent tasks.
Affiliation(s)
- Harshita Sharma
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Lior Drukker
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, United Kingdom
- Pierre Chatelain
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Richard Droste
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, United Kingdom
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
28. Fiorentino MC, Moccia S, Capparuccini M, Giamberini S, Frontoni E. A regression framework to head-circumference delineation from US fetal images. Comput Methods Programs Biomed 2021; 198:105771. [PMID: 33049451] [DOI: 10.1016/j.cmpb.2020.105771]
Abstract
BACKGROUND AND OBJECTIVES Measuring head-circumference (HC) length from ultrasound (US) images is a crucial clinical task to assess fetus growth. To lower intra- and inter-operator variability in HC length measurement, several computer-assisted solutions have been proposed over the years. Recently, a large number of deep-learning approaches have addressed the problem of HC delineation through segmentation of the whole fetal head via convolutional neural networks (CNNs). Since the task is an edge-delineation problem, we propose a different strategy based on regression CNNs. METHODS The proposed framework consists of a region-proposal CNN for head localization and centering, and a regression CNN for accurately delineating the HC. The first CNN is trained exploiting transfer learning, while we propose a training strategy for the regression CNN based on distance fields. RESULTS The framework was tested on the HC18 Challenge dataset, which consists of 999 training and 335 testing images. A mean absolute difference of 1.90 ( ± 1.76) mm and a Dice similarity coefficient of 97.75 ( ± 1.32) % were achieved, outperforming approaches in the literature. CONCLUSIONS The experimental results showed the effectiveness of the proposed framework, proving its potential in supporting clinicians during clinical practice.
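The regression CNN above is trained against distance fields. A brute-force sketch of constructing such a field, i.e. for every pixel the Euclidean distance to the nearest contour pixel (illustrative only; the paper's actual construction may differ in normalisation and sign conventions):

```python
import math

def distance_field(height, width, contour_pixels):
    """For each pixel of a (height x width) grid, compute the Euclidean
    distance to the nearest contour pixel. Brute force,
    O(H * W * len(contour)), which is fine for a small illustration."""
    field = [[0.0] * width for _ in range(height)]
    for r in range(height):
        for c in range(width):
            field[r][c] = min(math.hypot(r - cr, c - cc)
                              for cr, cc in contour_pixels)
    return field
```

A network regressing this field gets a dense, smooth training signal that is zero exactly on the head boundary, which is what makes the edge-delineation formulation attractive compared with a hard binary mask.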
Affiliation(s)
- Maria Chiara Fiorentino
- Department of Information Engineering, Universita Politecnica delle Marche, Via Brecce Bianche, 12, Ancona 60131, Italy
- Sara Moccia
- Department of Information Engineering, Universita Politecnica delle Marche, Via Brecce Bianche, 12, Ancona 60131, Italy; Department of Advanced Robotics, Istituto Italiano di Tecnologia, Via Morego, 30, Genova 16163, Italy
- Morris Capparuccini
- Department of Information Engineering, Universita Politecnica delle Marche, Via Brecce Bianche, 12, Ancona 60131, Italy
- Sara Giamberini
- Department of Information Engineering, Universita Politecnica delle Marche, Via Brecce Bianche, 12, Ancona 60131, Italy
- Emanuele Frontoni
- Department of Information Engineering, Universita Politecnica delle Marche, Via Brecce Bianche, 12, Ancona 60131, Italy
29. Yang X, Li H, Liu L, Ni D. Scale-aware Auto-context-guided Fetal US Segmentation with Structured Random Forests. BIO Integration 2020. [DOI: 10.15212/bioi-2020-0016]
Abstract
Accurate measurement of fetal biometrics in ultrasound at different trimesters is essential in assisting clinicians to conduct pregnancy diagnosis. However, the accuracy of manual segmentation for measurement is highly user-dependent. Here, we design a general framework for automatically segmenting fetal anatomical structures in two-dimensional (2D) ultrasound (US) images and thus make objective biometric measurements available. We first introduce structured random forests (SRFs) as the core discriminative predictor to recognize the region of fetal anatomical structures with a primary classification map. The patch-wise joint labeling presented by SRFs has inherent advantages in identifying an ambiguous/fuzzy boundary and reconstructing incomplete anatomical boundary in US. Then, to get a more accurate and smooth classification map, a scale-aware auto-context model is injected to enhance the contour details of the classification map from various visual levels. Final segmentation can be obtained from the converged classification map with thresholding. Our framework is validated on two important biometric measurements, which are fetal head circumference (HC) and abdominal circumference (AC). The final results illustrate that our proposed method outperforms state-of-the-art methods in terms of segmentation accuracy.
Affiliation(s)
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
- Haoming Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
- Li Liu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
30. Nascimento JC, Carneiro G. One Shot Segmentation: Unifying Rigid Detection and Non-Rigid Segmentation Using Elastic Regularization. IEEE Trans Pattern Anal Mach Intell 2020; 42:3054-3070. [PMID: 31217094] [DOI: 10.1109/tpami.2019.2922959]
Abstract
This paper proposes a novel approach for the non-rigid segmentation of deformable objects in image sequences, which is based on one-shot segmentation that unifies rigid detection and non-rigid segmentation using elastic regularization. The domain of application is the segmentation of a visual object that temporally undergoes a rigid transformation (e.g., affine transformation) and a non-rigid transformation (i.e., contour deformation). The majority of segmentation approaches to solve this problem are generally based on two steps that run in sequence: a rigid detection, followed by a non-rigid segmentation. In this paper, we propose a new approach, where both the rigid and non-rigid segmentation are performed in a single shot using a sparse low-dimensional manifold that represents the visual object deformations. Given the multi-modality of these deformations, the manifold partitions the training data into several patches, where each patch provides a segmentation proposal during the inference process. These multiple segmentation proposals are merged using the classification results produced by deep belief networks (DBN) that compute the confidence on each segmentation proposal. Thus, an ensemble of DBN classifiers is used for estimating the final segmentation. Compared to current methods proposed in the field, our proposed approach is advantageous in four aspects: (i) it is a unified framework to produce rigid and non-rigid segmentations; (ii) it uses an ensemble classification process, which can help the segmentation robustness; (iii) it provides a significant reduction in terms of the number of dimensions of the rigid and non-rigid segmentations search spaces, compared to current approaches that divide these two problems; and (iv) this lower dimensionality of the search space can also reduce the need for large annotated training sets to be used for estimating the DBN models. Experiments on the problem of left ventricle endocardial segmentation from ultrasound images, and lip segmentation from frontal facial images using the extended Cohn-Kanade (CK+) database, demonstrate the potential of the methodology through qualitative and quantitative evaluations, and the ability to reduce the search and training complexities without a significant impact on the segmentation accuracy.
31. Amiri M, Brooks R, Rivaz H. Fine-Tuning U-Net for Ultrasound Image Segmentation: Different Layers, Different Outcomes. IEEE Trans Ultrason Ferroelectr Freq Control 2020; 67:2510-2518. [PMID: 32763853] [DOI: 10.1109/tuffc.2020.3015081]
Abstract
One way of resolving the problem of scarce and expensive data in deep learning for medical applications is using transfer learning and fine-tuning a network which has been trained on a large data set. The common practice in transfer learning is to keep the shallow layers unchanged and to modify deeper layers according to the new data set. This approach may not work when using a U-Net and when moving from a different domain to ultrasound (US) images due to their drastically different appearance. In this study, we investigated the effect of fine-tuning different sets of layers of a pretrained U-Net for US image segmentation. Two different schemes were analyzed, based on two different definitions of shallow and deep layers. We studied simulated US images, as well as two human US data sets. We also included a chest X-ray data set. The results showed that choosing which layers to fine-tune is a critical task. In particular, they demonstrated that fine-tuning the last layers of the network, which is the common practice for classification networks, is often the worst strategy. It may therefore be more appropriate to fine-tune the shallow layers rather than deep layers in US image segmentation when using a U-Net. Shallow layers learn lower level features which are critical in automatic segmentation of medical images. Even when a large US data set is available, we observed that fine-tuning shallow layers is a faster approach compared to fine-tuning the whole network.
32. Deepika P, Suresh R, Pabitha P. Defending Against Child Death: Deep learning-based diagnosis method for abnormal identification of fetus ultrasound images. Comput Intell 2020. [DOI: 10.1111/coin.12394]
Affiliation(s)
- P. Deepika
- Department of Computer Science and Engineering, Rajalakshmi Institute of Technology, Chennai, India
- R.M. Suresh
- Department of Computer Science and Engineering, Sri Lakshmi Ammal Engineering College, Chennai, India
- P. Pabitha
- Department of Computer Science and Engineering, MIT Campus, Anna University, Chennai, India
33. Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images. Med Biol Eng Comput 2020; 58:2879-2892. [PMID: 32975706] [DOI: 10.1007/s11517-020-02242-5]
Abstract
Measurement of anatomical structures from ultrasound images requires the expertise of experienced clinicians. Moreover, artificial factors make automatic measurement complicated. In this paper, we aim to present a novel end-to-end deep learning network to automatically measure the fetal head circumference (HC), biparietal diameter (BPD), and occipitofrontal diameter (OFD) length from 2D ultrasound images. Fully convolutional neural networks (FCNNs) have shown significant improvement in natural image segmentation. Therefore, to overcome the potential difficulties in automated segmentation, we present a novel FCNN and add a regression branch for predicting OFD and BPD in parallel. In the segmentation branch, a feature pyramid is built inside our network from low-level feature layers to handle the variety of fetal heads in ultrasound images, which differs from traditional feature pyramid building methods. In order to select the most useful scale and reduce scale noise, an attention mechanism is used to filter the features. In the regression branch, for accurate estimation of OFD and BPD length, a new region of interest (ROI) pooling layer is proposed to extract the elliptic feature map. We also evaluate the performance of our method on the large HC18 dataset. Our experimental results show that our method achieves better performance than existing fetal head measurement methods. Graphical Abstract: Deep Neural Network for Fetal Head Measurement.
34. Sree SJ, Vasanthanayaki C. Ultrasound Fetal Image Segmentation Techniques: A Review. Curr Med Imaging 2020; 15:52-60. [PMID: 31964327] [DOI: 10.2174/1573405613666170622115527]
Abstract
BACKGROUND This paper reviews segmentation techniques for 2D ultrasound fetal images. Fetal anatomy measurements derived from the segmentation results are used to monitor the growth of the fetus. DISCUSSION The segmentation of fetal ultrasound images is a difficult task due to inherent artifacts and degradation of image quality with gestational age. There are segmentation techniques for particular biological structures such as the head, stomach, and femur; whole-fetus segmentation algorithms are only very few. CONCLUSION This paper presents a review of these segmentation techniques and summarizes the metrics used to evaluate them.
Affiliation(s)
- S Jayanthi Sree
- Department of ECE, Government College of Technology, Coimbatore, India
- C Vasanthanayaki
- Department of ECE, Government College of Technology, Coimbatore, India
37. Sridar P, Kumar A, Quinton A, Kennedy NJ, Nanan R, Kim J. An Automated Framework for Large Scale Retrospective Analysis of Ultrasound Images. IEEE J Transl Eng Health Med 2019; 7:1800909. [PMID: 31857918] [PMCID: PMC6908460] [DOI: 10.1109/jtehm.2019.2952379]
Abstract
OBJECTIVE Large scale retrospective analysis of fetal ultrasound (US) data is important in understanding the cumulative impact of antenatal factors on offspring's health outcomes. Although the benefits are evident, there is a paucity of research into such large scale studies as they require tedious and expensive manual processing of large data repositories. This study presents an automated framework to facilitate retrospective analysis of large scale US data repositories. METHOD Our framework consists of four modules: (1) an image classifier to distinguish the Brightness (B) -mode images; (2) a fetal image structure identifier to select US images containing user-defined fetal structures of interest (fSOI); (3) a biometry measurement algorithm to measure the fSOIs in the images; and (4) a visual evaluation module to allow clinicians to validate the outcomes. RESULTS We demonstrated our framework using the thalamus as the fSOI from a hospital repository of more than 80,000 patients, consisting of 3,816,967 antenatal US files (DICOM objects). Our framework classified 1,869,105 B-mode images, from which 38,786 thalamus images were identified. We selected a random subset of 1290 US files with 558 B-mode images (containing 19 thalamus images, the rest being other US data) and evaluated our framework's performance. On the evaluation set, B-mode image classification achieved accuracy, precision, and recall (APR) of 98.67%, 99.75% and 98.57%, respectively. For fSOI identification, APR was 93.12%, 97.76% and 80.78%, respectively. CONCLUSION We introduced a completely automated approach designed to analyze a large scale data repository to enable retrospective clinical research.
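The accuracy, precision and recall (APR) figures reported above follow the standard definitions; a generic sketch of computing them from raw labels (not the study's evaluation code):

```python
def apr(y_true, y_pred, positive=1):
    """Accuracy, precision and recall for a binary labelling task.
    Accuracy = correct / total; precision = TP / (TP + FP);
    recall = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```

The gap between precision (97.76%) and recall (80.78%) for fSOI identification, for instance, indicates the identifier rarely flags a non-thalamus image but misses a fair fraction of true thalamus images.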
Affiliation(s)
- Pradeeba Sridar
- School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia; Sydney Medical School Nepean, The University of Sydney, Sydney, NSW 2006, Australia
- Ashnil Kumar
- School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia; School of Biomedical Engineering, The University of Sydney, Sydney, NSW 2006, Australia
- Ann Quinton
- Sydney Medical School Nepean, The University of Sydney, Sydney, NSW 2006, Australia; School of Health, Medical and Applied Sciences, Central Queensland University, Sydney, NSW 2000, Australia; Charles Perkins Centre, The University of Sydney, Sydney, NSW 2006, Australia
- Ralph Nanan
- Sydney Medical School Nepean, The University of Sydney, Sydney, NSW 2006, Australia; Charles Perkins Centre, The University of Sydney, Sydney, NSW 2006, Australia
- Jinman Kim
- School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia; Sydney Medical School Nepean, The University of Sydney, Sydney, NSW 2006, Australia; Charles Perkins Centre, The University of Sydney, Sydney, NSW 2006, Australia
38
Sinclair M, Baumgartner CF, Matthew J, Bai W, Martinez JC, Li Y, Smith S, Knight CL, Kainz B, Hajnal J, King AP, Rueckert D. Human-level performance on automatic head biometrics in fetal ultrasound using fully convolutional neural networks. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2018:714-717. [PMID: 30440496; DOI: 10.1109/embc.2018.8512278]
Abstract
Measurement of head biometrics from fetal ultrasonography images is of key importance in monitoring the healthy development of fetuses. However, the accurate measurement of relevant anatomical structures is subject to large inter-observer variability in the clinic. To address this issue, an automated method utilizing Fully Convolutional Networks (FCN) is proposed to determine measurements of fetal head circumference (HC) and biparietal diameter (BPD). An FCN was trained on approximately 2000 2D ultrasound images of the head, with annotations provided by 45 different sonographers during routine screening examinations, to perform semantic segmentation of the head. An ellipse is fitted to the resulting segmentation contours to mimic the annotation typically produced by a sonographer. The model's performance was compared with inter-observer variability, where two experts manually annotated 100 test images. Mean absolute model-expert error was slightly better than inter-observer error for HC (1.99 mm vs 2.16 mm) and comparable for BPD (0.61 mm vs 0.59 mm), as was the Dice coefficient (0.980 vs 0.980). Our results demonstrate that the model performs at a level similar to a human expert and learns to produce accurate predictions from a large dataset annotated by many sonographers. Additionally, measurements are generated in near real-time at 15 fps on a GPU, which could speed up the clinical workflow for both skilled and trainee sonographers.
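The ellipse-fitting step above reduces biometry to computing properties of the fitted ellipse. A minimal sketch, assuming Ramanujan's first approximation for the perimeter and the convention that BPD is the minor axis (function names are illustrative, not from the paper):

```python
import math

def ellipse_circumference(a, b):
    """Approximate the circumference of an ellipse with semi-axes a and b
    using Ramanujan's first approximation (very accurate for head-like shapes)."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

def head_biometrics(a, b):
    """HC is the ellipse perimeter; BPD is taken here as the minor axis (2 * b)."""
    return ellipse_circumference(a, b), 2.0 * min(a, b)
```

For a circle (a == b) the approximation is exact, which makes a convenient sanity check.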
39

40
Lin Z, Li S, Ni D, Liao Y, Wen H, Du J, Chen S, Wang T, Lei B. Multi-task learning for quality assessment of fetal head ultrasound images. Med Image Anal 2019; 58:101548. [PMID: 31525671; DOI: 10.1016/j.media.2019.101548]
Abstract
It is essential to measure anatomical parameters in prenatal ultrasound images to assess the growth and development of the fetus, which relies heavily on obtaining a standard plane. However, the acquisition of a standard plane is, in turn, highly subjective and depends on the clinical experience of sonographers. To address this challenge, we propose a new multi-task learning framework using a faster regional convolutional neural network (MF R-CNN) architecture for standard plane detection and quality assessment. MF R-CNN can identify the critical anatomical structures of the fetal head, analyze whether the magnification of the ultrasound image is appropriate, and then perform quality assessment of ultrasound images based on clinical protocols. Specifically, the first five convolution blocks of the MF R-CNN learn features shared within the input data, which can be associated with the detection and classification tasks, and then extend to task-specific output streams. In training, to accommodate the different convergence rates of the different tasks, we devise a section-wise training method based on transfer learning. In addition, our proposed method uses prior clinical and statistical knowledge to reduce the false detection rate. By identifying the key anatomical structures and the magnification of the ultrasound image, we score the fetal head plane to judge whether or not it is a standard image. Experimental results on our self-collected dataset show that our method can accurately assess the quality of an ultrasound plane within half a second. Our method achieves promising performance compared with state-of-the-art methods, which can improve examination effectiveness and alleviate measurement error caused by improper ultrasound scanning.
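The protocol-based scoring idea can be sketched as a rubric over detected structures. The structure names, weights, and threshold below are hypothetical placeholders for illustration only, not the paper's actual clinical criteria:

```python
# Hypothetical scoring rubric: each detected anatomical structure contributes
# a weight, plus a bonus if the image magnification is deemed appropriate.
CRITERIA = {"thalamus": 2, "cavum_septi_pellucidi": 2, "midline_echo": 1, "skull_halo": 1}

def plane_score(detected_structures, magnification_ok, threshold=5):
    """Score a candidate plane and decide whether it counts as 'standard'.
    detected_structures: set of structure names found by the detector."""
    score = sum(w for name, w in CRITERIA.items() if name in detected_structures)
    if magnification_ok:
        score += 1
    return score, score >= threshold
```

In the paper this decision is produced by the multi-task network's outputs combined under the clinical protocol; the rubric here only conveys the scoring pattern.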
Affiliation(s)
- Zehui Lin
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Shengli Li
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Yimei Liao
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Huaxuan Wen
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Jie Du
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Siping Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China.
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China.
41
Kim HP, Lee SM, Kwon JY, Park Y, Kim KC, Seo JK. Automatic evaluation of fetal head biometry from ultrasound images using machine learning. Physiol Meas 2019; 40:065009. [PMID: 31091515; DOI: 10.1088/1361-6579/ab21ac]
Abstract
OBJECTIVE Ultrasound-based fetal biometric measurements, such as head circumference (HC) and biparietal diameter (BPD), are frequently used to evaluate gestational age and diagnose fetal central nervous system pathology. Because manual measurements are operator-dependent and time-consuming, much research is being actively conducted on automated methods. However, the existing automated methods are still not satisfactory in terms of accuracy and reliability, owing to difficulties dealing with various artefacts in ultrasound images. APPROACH A labeled dataset containing 102 ultrasound images was used for training, and validation was performed on 70 ultrasound images. MAIN RESULTS The method achieved success rates of 91.43% and 100% for HC and BPD estimation, respectively, and an accuracy of 87.14% for the plane acceptance check. SIGNIFICANCE This paper focuses on fetal head biometry and proposes a deep-learning-based method for estimating HC and BPD with a high degree of accuracy and reliability.
42
Antico M, Sasazawa F, Wu L, Jaiprakash A, Roberts J, Crawford R, Pandey AK, Fontanarosa D. Ultrasound guidance in minimally invasive robotic procedures. Med Image Anal 2019; 54:149-167. [DOI: 10.1016/j.media.2019.01.002]
43
Sridar P, Kumar A, Quinton A, Nanan R, Kim J, Krishnakumar R. Decision fusion-based fetal ultrasound image plane classification using convolutional neural networks. Ultrasound Med Biol 2019; 45:1259-1273. [PMID: 30826153; DOI: 10.1016/j.ultrasmedbio.2018.11.016]
Abstract
Machine learning for ultrasound image analysis and interpretation can be helpful for automated image classification in large-scale retrospective analyses, to objectively derive new indicators of abnormal fetal development that are embedded in ultrasound images. Current approaches to automatic classification are limited to the use of either image patches (cropped images) or the global (whole) image. As many fetal organs have similar visual features, cropped images can lead to misclassification of certain structures such as the kidneys and abdomen. Also, the whole image does not encode sufficient local information to identify different structures in different locations. Here we propose a method to automatically classify 14 different fetal structures in 2-D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image. Our method trains two feature extractors by fine-tuning pre-trained convolutional neural networks with the whole ultrasound fetal images and the discriminant regions of the fetal structures found in the whole image. The novelty of our method lies in integrating the classification decisions made from the global and local features without relying on priors. In addition, our method can use the classification outcome to localize the fetal structures in the image. Our experiments on a data set of 4074 2-D ultrasound images (training: 3109, test: 965) achieved a mean accuracy of 97.05%, mean precision of 76.47% and mean recall of 75.41%. A Cohen's κ of 0.72 indicated the highest agreement between the ground truth and the proposed method. The superiority of the proposed method over the other non-fusion-based methods is statistically significant (p < 0.05). We found that our method is capable of predicting images without ultrasound scanner overlays with a mean accuracy of 92%. The proposed method can be leveraged to retrospectively classify any ultrasound images in clinical research.
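The prior-free fusion of global and local classifier outputs might look like the following sketch, which combines two per-class probability vectors by product or mean and renormalizes. This is an assumed formulation for illustration, not the paper's exact fusion rule:

```python
def fuse_predictions(global_probs, local_probs, mode="product"):
    """Fuse per-class probabilities from a whole-image classifier and a
    cropped-region classifier without any learned prior.
    Returns the normalized fused distribution and the winning class index."""
    if mode == "product":
        fused = [g * l for g, l in zip(global_probs, local_probs)]
    else:  # "mean"
        fused = [(g + l) / 2.0 for g, l in zip(global_probs, local_probs)]
    total = sum(fused)
    fused = [f / total for f in fused]
    return fused, max(range(len(fused)), key=fused.__getitem__)
```

Product fusion rewards classes on which both views agree; mean fusion is more forgiving when one view is uncertain.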
Affiliation(s)
- Pradeeba Sridar
- Department of Engineering Design, Indian Institute of Technology Madras, India; School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Ashnil Kumar
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Ann Quinton
- Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
- Ralph Nanan
- Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
- Jinman Kim
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
44
van den Heuvel TLA, Petros H, Santini S, de Korte CL, van Ginneken B. Automated fetal head detection and circumference estimation from free-hand ultrasound sweeps using deep learning in resource-limited countries. Ultrasound Med Biol 2019; 45:773-785. [PMID: 30573305; DOI: 10.1016/j.ultrasmedbio.2018.09.015]
Abstract
Ultrasound imaging remains out of reach for most pregnant women in developing countries because it requires a trained sonographer to acquire and interpret the images. We address this problem by presenting a system that can automatically estimate the fetal head circumference (HC) from data obtained with the obstetric sweep protocol (OSP). The OSP consists of multiple pre-defined sweeps with the ultrasound transducer over the abdomen of the pregnant woman. The OSP can be taught within a day to any health care worker without prior knowledge of ultrasound. An experienced sonographer acquired both the standard plane (to obtain the reference HC) and the OSP from 183 pregnant women in St. Luke's Hospital, Wolisso, Ethiopia. The OSP data, which will most likely not contain the standard plane, were used to automatically estimate the HC using two fully convolutional neural networks. First, a VGG-Net-inspired network was trained to automatically detect the frames that contained the fetal head. Second, a U-Net-inspired network was trained to automatically measure the HC for all frames in which the first network detected a fetal head. The HC was estimated from these frame measurements, and the Hadlock curve was used to determine gestational age (GA). The results indicated that most automatically estimated GAs fell within the P2.5-P97.5 interval of the Hadlock curve compared with the GAs obtained from the reference HC, so it is possible to automatically estimate GA from OSP data. Our method therefore has potential application in providing maternal care in resource-constrained countries.
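The two-stage sweep pipeline can be sketched as below. Here `detect_head` and `measure_hc` stand in for the two trained networks, and the median aggregation and `min_frames` guard are illustrative assumptions rather than the paper's exact aggregation rule:

```python
import statistics

def estimate_hc_from_sweep(frames, detect_head, measure_hc, min_frames=3):
    """Two-stage sketch: a detector flags head-containing frames, a measurement
    model returns a per-frame HC, and the sweep-level HC is taken as the median
    of those per-frame measurements."""
    hcs = [measure_hc(f) for f in frames if detect_head(f)]
    if len(hcs) < min_frames:
        return None  # too few head frames to trust the estimate
    return statistics.median(hcs)
```

A robust aggregate such as the median guards against occasional off-plane frames that slip past the detector.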
Affiliation(s)
- Thomas L A van den Heuvel
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands.
- Hezkiel Petros
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia
- Stefano Santini
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia
- Chris L de Korte
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia; Physics of Fluids Group, MIRA, University of Twente, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Fraunhofer MEVIS, Bremen, Germany
45
Evaluation of an improved tool for non-invasive prediction of neonatal respiratory morbidity based on fully automated fetal lung ultrasound analysis. Sci Rep 2019; 9:1950. [PMID: 30760806; PMCID: PMC6374419; DOI: 10.1038/s41598-019-38576-w]
Abstract
The objective of this study was to evaluate the performance of a new version of quantusFLM®, a software tool for prediction of neonatal respiratory morbidity (NRM) by ultrasound, which incorporates a fully automated fetal lung delineation based on deep learning techniques. A set of 790 fetal lung ultrasound images obtained at 24+0 to 38+6 weeks' gestation was evaluated. Perinatal outcomes and the occurrence of NRM were recorded. quantusFLM® version 3.0 was applied to all images to automatically delineate the fetal lung and predict NRM risk. The test was compared with the same technology but using a manual delineation of the fetal lung, and with a scenario where only gestational age was available. The software predicted NRM with a sensitivity, specificity, and positive and negative predictive value of 71.0%, 94.7%, 67.9%, and 95.4%, respectively, with an accuracy of 91.5%. The accuracy for predicting NRM obtained with the same texture analysis but using a manual delineation of the lung was 90.3%, and with gestational age alone it was 75.6%. In summary, this automated and non-invasive software predicted NRM with a performance similar to that reported for tests based on amniotic fluid analysis and much greater than that of gestational age alone.
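The reported statistics all derive from a 2x2 confusion matrix; a small sketch of how they are computed (the function name is illustrative):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV and accuracy from confusion-matrix
    counts: the statistics this study reports for NRM prediction."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on positives
        "specificity": tn / (tn + fp),   # recall on negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```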
46
Mishra D, Chaudhury S, Sarkar M, Soin AS. Ultrasound image segmentation: a deeply supervised network with attention to boundaries. IEEE Trans Biomed Eng 2018; 66:1637-1648. [PMID: 30346279; DOI: 10.1109/tbme.2018.2877577]
Abstract
OBJECTIVE Segmentation of anatomical structures in ultrasound images requires vast radiological knowledge and experience. Moreover, manual segmentation often results in subjective variations; therefore, an automatic segmentation is desirable. We aim to develop a fully convolutional neural network (FCNN) with attentional deep supervision for the automatic and accurate segmentation of ultrasound images. METHOD FCNNs/CNNs are used to infer high-level context using low-level image features. In this paper, a sub-problem specific deep supervision of the FCNN is performed. The attention of fine resolution layers is steered to learn object boundary definitions using auxiliary losses, whereas coarse resolution layers are trained to discriminate object regions from the background. Furthermore, a customized scheme for downweighting the auxiliary losses and a trainable fusion layer are introduced. This produces an accurate segmentation and helps in dealing with the broken boundaries often found in ultrasound images. RESULTS The proposed network is first tested for blood vessel segmentation in liver images. It achieves an F1 score, mean intersection over union, and Dice index of 0.83, 0.83, and 0.79, respectively. The best values among existing approaches, produced by U-Net, are 0.74, 0.81, and 0.75, respectively. The proposed network also achieves a Dice index of 0.91 in the lumen segmentation experiments on the MICCAI 2011 IVUS challenge dataset, close to the provided reference value of 0.93. Furthermore, improvements similar to the vessel segmentation experiments are observed in the lesion segmentation experiment. CONCLUSION Deep supervision of the network based on the input-output characteristics of the layers results in improved overall segmentation accuracy. SIGNIFICANCE Sub-problem specific deep supervision for ultrasound image segmentation is the main contribution of this paper. Currently, the network is trained and tested on fixed-size inputs; this requires image resizing and limits performance on small images.
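The downweighting of auxiliary losses can be sketched as a weighted sum with a decaying schedule. The constants and the exponential schedule below illustrate the general deep-supervision pattern and are not the paper's customized scheme:

```python
def deeply_supervised_loss(main_loss, aux_losses, base_weight=0.3, decay=0.8, epoch=0):
    """Combine the primary segmentation loss with downweighted auxiliary
    (boundary/region) losses from intermediate layers; the auxiliary weight
    shrinks as training progresses so the main objective dominates late on."""
    w = base_weight * (decay ** epoch)
    return main_loss + w * sum(aux_losses)
```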
47
Kim B, Kim KC, Park Y, Kwon JY, Jang J, Seo JK. Machine-learning-based automatic identification of fetal abdominal circumference from ultrasound images. Physiol Meas 2018; 39:105007. [PMID: 30226815; DOI: 10.1088/1361-6579/aae255]
Abstract
OBJECTIVE Obstetricians mainly use ultrasound imaging for fetal biometric measurements. However, such measurements are cumbersome; hence, there is an urgent need for automatic biometric estimation. Automated analysis of ultrasound images is complicated owing to the patient-specific, operator-dependent, and machine-specific characteristics of such images. APPROACH This paper proposes a method for automatic fetal biometry estimation from 2D ultrasound data through several processes, each implemented with a specially designed convolutional neural network (CNN) or U-Net. These machine learning techniques take clinicians' decisions, anatomical structures, and the characteristics of ultrasound images into account. The proposed method is divided into three steps: initial abdominal circumference (AC) estimation, AC measurement, and plane acceptance checking. MAIN RESULTS A CNN is used to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein), and a Hough transform is used to obtain an initial estimate of the AC. These data are applied to other CNNs to estimate the spine position and bone regions. The obtained information is then used to determine the final AC. After determining the AC, a U-Net and a classification CNN are used to check whether the image is suitable for AC measurement. Finally, the efficacy of the proposed method is validated with clinical data. SIGNIFICANCE Our method achieved a Dice similarity metric of [Formula: see text] for AC measurement and an accuracy of 87.10% for the acceptance check of the fetal abdominal standard plane.
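The Hough-transform step that yields the initial AC estimate can be illustrated with a minimal voting accumulator for circle centers at a known radius. This is a simplification for intuition; the paper's implementation operates on real edge maps and searches over radii:

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, n_angles=360):
    """Minimal Hough voting: every edge point votes for all candidate centers
    lying `radius` away from it; the most-voted cell is the circle center."""
    votes = Counter()
    for x, y in edge_points:
        for k in range(n_angles):
            t = 2.0 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]
```

Votes from points on a common circle pile up at its true center, which is what makes the estimate robust to isolated spurious edges.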
Affiliation(s)
- Bukweon Kim
- Department of Computational Science and Engineering, Yonsei University, Seoul 03722, Republic of Korea
48
Torrents-Barrena J, Piella G, Masoller N, Gratacós E, Eixarch E, Ceresa M, Ballester MÁG. Segmentation and classification in MRI and US fetal imaging: recent trends and future prospects. Med Image Anal 2018; 51:61-88. [PMID: 30390513; DOI: 10.1016/j.media.2018.10.003]
Abstract
Fetal imaging is a burgeoning topic. New advancements in both magnetic resonance imaging and (3D) ultrasound now allow doctors to diagnose fetal structural abnormalities such as those involved in twin-to-twin transfusion syndrome, gestational diabetes mellitus, pulmonary sequestration and hypoplasia, congenital heart disease, diaphragmatic hernia, ventriculomegaly, etc. Considering the continued breakthroughs in utero image analysis and (3D) reconstruction models, it is now possible to gain more insight into the ongoing development of the fetus. The best prenatal diagnostic performance relies on clinicians' thorough knowledge of fetal anatomy. Fetal imaging is therefore likely to expand and increase in prevalence in the forthcoming years. This review covers, for the first time, state-of-the-art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound. Potential applications of the aforementioned methods in clinical settings are also inspected. Finally, improvements to existing approaches as well as the most promising avenues for new areas of research are briefly outlined.
Affiliation(s)
- Jordina Torrents-Barrena
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain.
- Gemma Piella
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Narcís Masoller
- BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Eduard Gratacós
- BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Elisenda Eixarch
- BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Mario Ceresa
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Miguel Ángel González Ballester
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; ICREA, Barcelona, Spain
49
van den Heuvel TLA, de Bruijn D, de Korte CL, van Ginneken B. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS One 2018; 13:e0200412. [PMID: 30138319; PMCID: PMC6107118; DOI: 10.1371/journal.pone.0200412]
Abstract
In this paper we present a computer aided detection (CAD) system for automated measurement of the fetal head circumference (HC) in 2D ultrasound images for all trimesters of pregnancy. The HC can be used to estimate the gestational age and monitor the growth of the fetus. Automated HC assessment could be valuable in developing countries, where there is a severe shortage of trained sonographers. The CAD system consists of two steps: first, Haar-like features are computed from the ultrasound images to train a random forest classifier to locate the fetal skull; second, the HC is extracted using a Hough transform, dynamic programming and an ellipse fit. The CAD system was trained on 999 images and validated on an independent test set of 335 images from all trimesters. The test set was manually annotated by an experienced sonographer and a medical researcher. The reference gestational age (GA) was estimated using the crown-rump length (CRL) measurement. The mean difference between the reference GA and the GA estimated by the experienced sonographer was 0.8 ± 2.6, -0.0 ± 4.6 and 1.9 ± 11.0 days for the first, second and third trimester, respectively. The mean difference between the reference GA and the GA estimated by the medical researcher was 1.6 ± 2.7, 2.0 ± 4.8 and 3.9 ± 13.7 days. The mean difference between the reference GA and the GA estimated by the CAD system was 0.6 ± 4.3, 0.4 ± 4.7 and 2.5 ± 12.4 days. The results show that the CAD system performs comparably to an experienced sonographer. The presented system shows similar or superior results compared with systems published in the literature. This is the first automated system for HC assessment evaluated on a large test set containing data from all trimesters of pregnancy.
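The trimester-wise agreement figures above (mean difference ± SD in days) are straightforward to compute; a small sketch with an illustrative function name:

```python
import statistics

def ga_agreement(reference_days, estimated_days):
    """Mean difference and sample standard deviation (in days) between
    reference GA and estimated GA, the agreement statistic reported per trimester."""
    diffs = [e - r for r, e in zip(reference_days, estimated_days)]
    return statistics.mean(diffs), statistics.stdev(diffs)
```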
Affiliation(s)
- Thomas L. A. van den Heuvel
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Dagmar de Bruijn
- Department of Obstetrics and Gynecology, Radboud University Medical Center, Nijmegen, the Netherlands
- Chris L. de Korte
- Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Fraunhofer MEVIS, Bremen, Germany
50
Automated techniques for the interpretation of fetal abnormalities: a review. Appl Bionics Biomech 2018; 2018:6452050. [PMID: 29983738; PMCID: PMC6015700; DOI: 10.1155/2018/6452050]
Abstract
Ultrasound (US) image segmentation methods, focusing on techniques developed for fetal biometric parameters and nuchal translucency, are briefly reviewed. Segmentation techniques allow the fetus to be identified in ultrasound medical images and fetal parameters to be calculated, enabling timely detection of fetal abnormalities so that necessary action can be taken by the pregnant woman. Firstly, a detailed literature review of fetal biometric parameters and nuchal translucency is offered to highlight investigation approaches with a degree of validation in diverse clinical domains. Then, a categorization of the bibliographic assessment of recent research efforts in the segmentation of 2D fetal ultrasound images is presented. Fetal images of high-risk pregnant women are used for routine, continuous monitoring of fetal parameters. These parameters are used for the detection of fetal weight, fetal growth, gestational age, and any possible abnormalities.