1. Hurtado J, Sierra-Franco CA, Motta T, Raposo A. Segmentation of four-chamber view images in fetal ultrasound exams using a novel deep learning model ensemble method. Comput Biol Med 2024; 183:109188. [PMID: 39395344] [DOI: 10.1016/j.compbiomed.2024.109188]
Abstract
Fetal echocardiography, a specialized ultrasound application commonly utilized for fetal heart assessment, can greatly benefit from automated segmentation of anatomical structures, aiding operators in their evaluations. We introduce a novel approach that combines various deep learning models for segmenting key anatomical structures in 2D ultrasound images of the fetal heart. Our ensemble method combines the raw predictions from the selected models, obtaining the optimal set of segmentation components that closely approximate the distribution of the fetal heart, resulting in improved segmentation outcomes. The selection of these components involves sequential and hierarchical geometry filtering, focusing on the analysis of shape and relative distances. Unlike other ensemble strategies that average predictions, our method works as a shape selector, ensuring that the final segmentation aligns more accurately with anatomical expectations. Considering a large private dataset for model training and evaluation, we present both numerical and visual experiments highlighting the advantages of our method in comparison to the segmentations produced by the individual models and a conventional average ensemble. Furthermore, we show some applications where our method proves instrumental in obtaining reliable estimations.
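To make the idea of a geometry-filtering ensemble concrete, the sketch below selects, for a single cardiac structure, the candidate mask (one per model) whose shape looks plausible and whose centroid lies closest to an anchor structure, instead of averaging the predictions. The helper names, thresholds, and the random stand-in predictions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def plausible_shape(mask, min_area=200, max_eccentricity=0.98):
    """Heuristic shape filter: reject tiny or degenerate components (illustrative thresholds)."""
    if mask.sum() < min_area:
        return False
    ys, xs = np.nonzero(mask)
    evals = np.sort(np.linalg.eigvalsh(np.cov(np.stack([ys, xs]))))[::-1]
    eccentricity = np.sqrt(1.0 - evals[1] / max(evals[0], 1e-9))
    return eccentricity <= max_eccentricity

def select_component(candidates, anchor_centroid, max_dist=80.0):
    """Pick the candidate mask whose centroid lies closest to an anchor structure,
    provided its shape passes the plausibility filter (a shape selector, not an average)."""
    best, best_d = None, np.inf
    for mask in candidates:
        if not plausible_shape(mask):
            continue
        cy, cx = ndimage.center_of_mass(mask)
        d = np.hypot(cy - anchor_centroid[0], cx - anchor_centroid[1])
        if d < best_d and d <= max_dist:
            best, best_d = mask, d
    return best

# Usage: one binary mask per model for the same structure (e.g., a ventricle),
# filtered against the centroid of a previously selected anchor mask.
preds = [(np.random.rand(256, 256) > 0.7).astype(np.uint8) for _ in range(3)]  # stand-ins for model outputs
anchor = (128.0, 128.0)                                                         # assumed anchor centroid
chosen = select_component(preds, anchor)
```

In the hierarchical setting the abstract describes, the anchor would itself be a component selected at an earlier stage, so that structures are filtered in sequence according to their expected relative positions.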
Affiliation(s)
- Jan Hurtado: Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil; Department of Informatics, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil.
- Cesar A Sierra-Franco: Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil.
- Thiago Motta: Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil.
- Alberto Raposo: Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil; Department of Informatics, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil.

2. Chen Z, Lu Y, Long S, Campello VM, Bai J, Lekadir K. Fetal Head and Pubic Symphysis Segmentation in Intrapartum Ultrasound Image Using a Dual-Path Boundary-Guided Residual Network. IEEE J Biomed Health Inform 2024; 28:4648-4659. [PMID: 38739504] [DOI: 10.1109/jbhi.2024.3399762]
Abstract
Accurate segmentation of the fetal head and pubic symphysis in intrapartum ultrasound images and measurement of the fetal angle of progression (AoP) are critical to both outcome prediction and complication prevention in delivery. However, owing to the poor quality of perinatal ultrasound imaging, with blurred target boundaries and the relatively small size of the pubic symphysis, fully automated and accurate segmentation remains challenging. In this paper, we propose a dual-path boundary-guided residual network (DBRN), a novel approach to tackling these challenges. The model contains a multi-scale weighted module (MWM) to gather global context information and enhance the feature response within the target region by weighting the feature map. It also incorporates an enhanced boundary module (EBM) to obtain more precise boundary information. Furthermore, the model introduces a boundary-guided dual-attention residual module (BDRM) for residual learning. BDRM leverages boundary information as prior knowledge and employs spatial attention to focus on background and foreground simultaneously, capturing concealed details and improving segmentation accuracy. Extensive comparative experiments were conducted on three datasets. The proposed method achieves an average Dice score of 0.908 ± 0.05 and an average Hausdorff distance of 3.396 ± 0.66 mm, outperforming state-of-the-art competitors. In addition, the average difference between the automatic AoP measurements based on this model and the manual measurements is 6.157°, indicating good consistency and broad application prospects in clinical practice.
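For orientation, the two metrics reported above can be computed from binary segmentation masks roughly as in the sketch below; the boundary extraction, helper names, and the isotropic pixel spacing used for the millimetre conversion are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

def boundary(mask):
    """One-pixel-wide boundary of a binary mask."""
    m = mask.astype(bool)
    return m & ~ndimage.binary_erosion(m)

def hausdorff_mm(pred, gt, spacing_mm=0.2):
    """Symmetric Hausdorff distance between mask boundaries, converted to millimetres
    with an assumed isotropic pixel spacing."""
    p, g = np.argwhere(boundary(pred)), np.argwhere(boundary(gt))
    d = max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
    return d * spacing_mm

# Toy masks standing in for fetal-head / pubic-symphysis segmentations.
gt = np.zeros((128, 128), dtype=np.uint8); gt[40:90, 30:100] = 1
pred = np.zeros_like(gt); pred[42:92, 28:98] = 1
print(f"Dice: {dice_score(pred, gt):.3f}  Hausdorff: {hausdorff_mm(pred, gt):.2f} mm")
```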

3. Ramirez Zegarra R, Ghi T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. Ultrasound Obstet Gynecol 2023; 62:185-194. [PMID: 36436205] [DOI: 10.1002/uog.26130]
Abstract
Deep learning is widely considered the leading artificial intelligence tool for image analysis. Deep-learning algorithms excel at image recognition, which makes them valuable in medical imaging. Obstetric ultrasound has become the gold-standard imaging modality for the detection and diagnosis of fetal malformations. However, ultrasound relies heavily on the operator's experience, making it unreliable in inexperienced hands. Several studies have proposed the use of deep-learning models as a tool to support sonographers, in an attempt to overcome these problems inherent to ultrasound. Deep learning has many clinical applications in the field of fetal imaging, including identification of normal and abnormal fetal anatomy and measurement of fetal biometry. In this Review, we provide a comprehensive explanation of the fundamentals of deep learning in fetal imaging, with particular focus on its clinical applicability.
Affiliation(s)
- R Ramirez Zegarra: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- T Ghi: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy

4. Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802] [DOI: 10.1002/pd.6411]
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched from inception through May 2022 using the keywords "neural networks" OR "artificial intelligence" OR "machine learning" OR "deep learning", AND "obstetrics" OR "obstetrical" OR "fetus" OR "foetus" OR "fetal" OR "foetal" OR "pregnancy" OR "pregnant", AND "ultrasound". The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound. Obstetric ultrasound was defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta; AI was defined as the use of neural networks, machine learning, or deep learning methods. The search identified a total of 127 papers that fulfilled the inclusion criteria. Current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses such as assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
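For illustration only, the Boolean strategy above could be assembled into a single query string as sketched below; the grouping and quoting are assumptions about how the terms were combined, not the exact syntax used by the review.

```python
# Hypothetical reconstruction of the review's search string from the keywords listed above.
ai_terms = ["neural networks", "artificial intelligence", "machine learning", "deep learning"]
ob_terms = ["obstetrics", "obstetrical", "fetus", "foetus", "fetal", "foetal", "pregnancy", "pregnant"]

def or_group(terms):
    """Quote each term and join the group with OR, wrapped in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join([or_group(ai_terms), or_group(ob_terms), '"ultrasound"'])
print(query)  # paste into the PubMed / Cochrane Library / ClinicalTrials.gov search fields
```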
Affiliation(s)
- Rebecca Horgan: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Lea Nehme: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Alfred Abuhamad: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA

5. Gong Z, Song J, Guo W, Ju R, Zhao D, Tan W, Zhou W, Zhang G. Abdomen tissues segmentation from computed tomography images using deep learning and level set methods. Math Biosci Eng 2022; 19:14074-14085. [PMID: 36654080] [DOI: 10.3934/mbe.2022655]
Abstract
Accurate segmentation of abdominal tissues is one of the crucial tasks in radiation therapy planning for related diseases. However, segmenting abdominal tissues (liver, kidney) is difficult because of the low contrast between these tissues and their surrounding organs. In this paper, an attention-based deep learning method for automated abdominal tissue segmentation is proposed. In our method, image cropping is first applied to the original images. A U-Net model with an attention mechanism is then constructed to obtain the initial abdominal tissue segmentation. Finally, a level set evolution consisting of three energy terms is used to optimize the initial segmentation. The proposed model is evaluated across 470 subsets. For liver segmentation, the mean Dice scores are 96.2% and 95.1% for the FLARE21 and LiTS datasets, respectively. For kidney segmentation, the mean Dice scores are 96.6% and 95.7% for the FLARE21 and LiTS datasets, respectively. Experimental evaluation shows that the proposed method obtains better segmentation results than other methods.
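As a rough sketch of the attention idea mentioned above, the snippet below shows a generic additive attention gate of the kind often placed on U-Net skip connections; the channel sizes and module layout are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate over a U-Net skip connection: the decoder (gating) signal
    suppresses irrelevant encoder features before they are concatenated into the decoder."""
    def __init__(self, enc_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(enc_ch, inter_ch, kernel_size=1)   # project encoder features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # collapse to a spatial map

    def forward(self, enc_feat, gate):
        # enc_feat: encoder skip features; gate: decoder features at the same spatial size.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(enc_feat) + self.phi(gate))))
        return enc_feat * attn  # re-weight encoder features toward the target region

# Toy usage with assumed channel counts.
gate_block = AttentionGate(enc_ch=64, gate_ch=128, inter_ch=32)
skip = torch.randn(1, 64, 56, 56)
gating = torch.randn(1, 128, 56, 56)
out = gate_block(skip, gating)   # same shape as `skip`
```

The learned spatial map biases the network toward the organ of interest, which is one common way an attention mechanism is combined with a U-Net before a separate refinement stage such as level set evolution.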
Affiliation(s)
- Zhaoxuan Gong: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Jing Song: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Wei Guo: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Ronghui Ju: Liaoning Provincial People's Hospital, Shenyang 110067, China
- Dazhe Zhao: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Wenjun Tan: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China
- Wei Zhou: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China
- Guodong Zhang: Department of Computer Science and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110819, China

6. Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via ultrasound images. iScience 2022; 25:104713. [PMID: 35856024] [PMCID: PMC9287600] [DOI: 10.1016/j.isci.2022.104713]
Abstract
Several reviews have been conducted on artificial intelligence (AI) techniques to improve pregnancy outcomes, but they do not focus on ultrasound images. This survey explores how AI can assist with fetal growth monitoring via ultrasound images. We report our findings following the PRISMA guidelines, based on a comprehensive search of eight bibliographic databases. Out of 1269 studies, 107 were included. We found that 2D ultrasound images were more commonly used (88 studies) than 3D and 4D ultrasound images (19). Classification was the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The areas that gained the most traction within the pregnancy domain were the fetal head (43), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Highlights
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized based on four categories: general, head, heart, face, abdomen
- The most used AI techniques are classification, segmentation, object detection, and reinforcement learning
- The research and practical implications are included

7. De Jesus-Rodriguez HJ, Morgan MA, Sagreiya H. Deep Learning in Kidney Ultrasound: Overview, Frontiers, and Challenges. Adv Chronic Kidney Dis 2021; 28:262-269. [PMID: 34906311] [DOI: 10.1053/j.ackd.2021.07.004]
Abstract
Ultrasonography is a practical imaging technique used in numerous health care settings. It is relatively inexpensive, portable, and safe, and it has dynamic capabilities that make it an invaluable tool for a wide variety of diagnostic and interventional studies. Recently, there has been a revolution in medical imaging using artificial intelligence (AI). A particularly potent form of AI is deep learning, in which the computer learns to recognize pixel or written data on its own without the selection of predetermined features, usually through a specific neural network architecture. Neural networks vary in architecture depending on their task, and key design considerations include the number of layers and complexity, data available, technical requirements, and domain knowledge. Deep learning models offer the potential for promising innovations to workflow, image quality, and vision tasks in sonography. However, there are key limitations and challenges in creating reliable and safe AI models for patients and clinicians.