1. Zhao L, Tan G, Wu Q, Pu B, Ren H, Li S, Li K. FARN: Fetal Anatomy Reasoning Network for Detection With Global Context Semantic and Local Topology Relationship. IEEE J Biomed Health Inform 2024;28:4866-4877. PMID: 38648141. DOI: 10.1109/jbhi.2024.3392531.
Abstract
Accurate recognition of fetal anatomical structures is a pivotal task in ultrasound (US) image analysis. Sonographers naturally apply anatomical knowledge and clinical expertise when recognizing key anatomical structures in complex US images. However, mainstream object detection approaches usually treat the recognition of each structure separately, overlooking anatomical correlations between different structures in fetal US planes. In this work, we propose a Fetal Anatomy Reasoning Network (FARN) that incorporates two forms of relationship: a global context semantic block summarized with visual similarity and a local topology relationship block depicting structural pair constraints. Specifically, in the proposed Adaptive Relation Graph Reasoning (ARGR) module, anatomical structures are treated as nodes, and the two kinds of relationships between nodes are modeled as edges. The flexibility of the model is enhanced by constructing the adaptive relationship graph in a data-driven way, enabling adaptation to various data samples without the need for predefined additional constraints. The feature representation is further refined by aggregating the outputs of the ARGR module. Comprehensive experimental results demonstrate that FARN achieves promising performance in detecting 37 anatomical structures across key US planes in tertiary obstetric screening. FARN effectively utilizes key relationships to improve detection performance, demonstrates robustness to small-scale, similar, and indistinct structures, and avoids some detection errors that deviate from anatomical norms. Overall, our study serves as a resource for developing efficient and concise approaches to modeling inter-anatomy relationships.
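The data-driven relation graph sketched in this abstract can be caricatured in a few lines. Everything below is an illustrative assumption rather than FARN's actual formulation: nodes are detected structures, edge weights come from pairwise cosine similarity of node features, and one aggregation step refines each node.

```python
import numpy as np

def adaptive_graph_step(node_feats: np.ndarray) -> np.ndarray:
    """One illustrative relation-reasoning step: build a data-driven
    adjacency from pairwise feature similarity, then aggregate
    neighbour features to refine each node (anatomical structure)."""
    # Cosine similarity between every pair of structure embeddings.
    norms = np.linalg.norm(node_feats, axis=1, keepdims=True)
    unit = node_feats / np.clip(norms, 1e-8, None)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)  # no self-loops
    # Softmax each row into edge weights: the "adaptive" relationship graph.
    weights = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    # Blend aggregated neighbour information with the original features.
    return 0.5 * node_feats + 0.5 * (weights @ node_feats)

feats = np.random.default_rng(0).normal(size=(5, 8))  # 5 structures, 8-d features
refined = adaptive_graph_step(feats)
```

Because the adjacency is computed from the features themselves, no predefined anatomical constraints are needed, which is the flexibility the abstract emphasizes.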
2. Luosang G, Wang Z, Liu J, Zeng F, Yi Z, Wang J. Automated Quality Assessment of Medical Images in Echocardiography Using Neural Networks with Adaptive Ranking and Structure-Aware Learning. Int J Neural Syst 2024:2450054. PMID: 38984421. DOI: 10.1142/s0129065724500540.
Abstract
The quality of medical images is crucial for accurately diagnosing and treating various diseases. However, current automated methods for assessing image quality are based on neural networks that often focus solely on pixel distortion and overlook the significance of complex structures within the images. This study introduces a novel neural network model designed explicitly for automated image quality assessment that addresses both pixel and semantic distortion. For pixel distortion assessment, the model introduces an adaptive ranking mechanism enhanced with contrast sensitivity weighting to refine the detection of minor variances between similar images. More significantly, the model integrates a structure-aware learning module employing graph neural networks, which is adept at deciphering the intricate relationships between an image's semantic structure and its quality. When evaluated on two ultrasound imaging datasets, the proposed method outperforms existing leading models. It also integrates seamlessly into clinical workflows, enabling the real-time image quality assessment that is crucial for precise disease diagnosis and treatment.
Affiliation(s)
- Gadeng Luosang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, P. R. China
- College of Information Science and Technology, Tibet University, Lhasa 850000, P. R. China
- Zhihua Wang
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, P. R. China
- Anhui Kunlong Kangxin Medical Technology Company Limited, Anhui 230000, P. R. China
- Jian Liu
- Department of Ultrasound, Clinical Medical College, The First Affiliated Hospital of Chengdu Medical College, Chengdu 610599, P. R. China
- Fanxin Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Sichuan 635099, P. R. China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, P. R. China
- Jianyong Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, P. R. China
3. Liang B, Peng F, Luo D, Zeng Q, Wen H, Zheng B, Zou Z, An L, Wen H, Wen X, Liao Y, Yuan Y, Li S. Automatic segmentation of 15 critical anatomical labels and measurements of cardiac axis and cardiothoracic ratio in fetal four chambers using nnU-NetV2. BMC Med Inform Decis Mak 2024;24:128. PMID: 38773456. PMCID: PMC11106923. DOI: 10.1186/s12911-024-02527-x.
Abstract
BACKGROUND: Accurate segmentation of critical anatomical structures in fetal four-chamber view images is essential for the early detection of congenital heart defects. Current prenatal screening methods rely on manual measurements, which are time-consuming and prone to inter-observer variability. This study develops an AI-based model using the state-of-the-art nnU-NetV2 architecture for automatic segmentation and measurement of key anatomical structures in fetal four-chamber view images.
METHODS: A dataset of 1,083 high-quality fetal four-chamber view images was annotated with 15 critical anatomical labels and divided into training/validation (867 images) and test (216 images) sets. An AI-based model using the nnU-NetV2 architecture was trained on the annotated images and evaluated using the mean Dice coefficient (mDice) and mean intersection over union (mIoU). The model's performance in automatically computing the cardiac axis (CAx) and cardiothoracic ratio (CTR) was compared with measurements from sonographers with varying levels of experience.
RESULTS: The AI-based model achieved an mDice of 87.11% and an mIoU of 77.68% for the segmentation of critical anatomical structures. The model's automated CAx and CTR measurements showed strong agreement with those of experienced sonographers, with intraclass correlation coefficients (ICCs) of 0.83 and 0.81, respectively. Bland-Altman analysis further confirmed the high agreement between the model and experienced sonographers.
CONCLUSION: We developed an AI-based model using the nnU-NetV2 architecture for accurate segmentation and automated measurement of critical anatomical structures in fetal four-chamber view images. The model demonstrated high segmentation accuracy and strong agreement with experienced sonographers in computing clinically relevant parameters. This approach has the potential to improve the efficiency and reliability of prenatal cardiac screening, ultimately contributing to the early detection of congenital heart defects.
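The two segmentation metrics reported here, mean Dice and mean IoU, are per-label overlap scores averaged across the 15 labels. A minimal sketch for a single binary label (the toy masks are illustrative only):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy 2x3 masks: intersection = 2 pixels, union = 4 pixels.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
d, j = dice(pred, gt), iou(pred, gt)
```

For mDice/mIoU, these per-label scores would simply be averaged over all anatomical labels.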
Affiliation(s)
- Bocheng Liang
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Fengfeng Peng
- Department of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082, China
- Dandan Luo
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Qing Zeng
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Huaxuan Wen
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Bowen Zheng
- Department of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082, China
- Zhiying Zou
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Liting An
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Huiying Wen
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Xin Wen
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Yimei Liao
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Ying Yuan
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
- Shengli Li
- Department of Ultrasound, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, 518028, China
4. Sriraam N, Chinta B, Suresh S, Sudharshan S. Ultrasound imaging based recognition of prenatal anomalies: a systematic clinical engineering review. Prog Biomed Eng (Bristol) 2024;6:023002. PMID: 39655845. DOI: 10.1088/2516-1091/ad3a4b.
Abstract
For prenatal screening, ultrasound (US) imaging allows real-time observation of developing fetal anatomy. Understanding normal and aberrant forms through extensive fetal structural assessment enables early detection and intervention. However, the reliability of anomaly diagnosis varies depending on operator expertise and device limitations. First-trimester scans in conjunction with circulating biochemical markers are critical in identifying high-risk pregnancies, but they also pose technical challenges. Recent engineering advances in automated diagnosis, such as artificial intelligence (AI)-based US image processing and multimodal data fusion, are being developed to improve screening efficiency, accuracy, and consistency. Still, building trust in these data-driven solutions is necessary for their integration and acceptance in clinical settings. Transparency can be promoted by explainable AI (XAI) technologies that provide visual interpretations and illustrate the underlying diagnostic decision-making process. An explanatory framework based on deep learning is suggested to construct charts depicting anomaly screening results from US video feeds. AI modelling can then be applied to these charts to connect defects with probable deformations. Overall, engineering approaches that improve imaging, automation, and interpretability hold enormous promise for transforming traditional workflows and expanding diagnostic capabilities for better prenatal care.
Affiliation(s)
- Natarajan Sriraam
- Center for Medical Electronics and Computing, Dept of Medical Electronics, Ramaiah Institute of Technology (RIT), Bangalore, India
- Babu Chinta
- Center for Medical Electronics and Computing, Dept of Medical Electronics, Ramaiah Institute of Technology (RIT), Bangalore, India
5. Pu B, Li K, Chen J, Lu Y, Zeng Q, Yang J, Li S. HFSCCD: A Hybrid Neural Network for Fetal Standard Cardiac Cycle Detection in Ultrasound Videos. IEEE J Biomed Health Inform 2024;28:2943-2954. PMID: 38412077. DOI: 10.1109/jbhi.2024.3370507.
Abstract
In fetal cardiac ultrasound examination, standard cardiac cycle (SCC) recognition is the essential foundation for diagnosing congenital heart disease. Previous studies have mostly focused on the detection of adult cardiac cycles, which may not be applicable to the fetus. In clinical practice, localization of SCCs requires accurate recognition of end-systole (ES) and end-diastole (ED) frames, ensuring that every frame in the cycle is a standard view. Most existing methods are not based on the detection of key anatomical structures, so they may fail to reject irrelevant views and background frames, may produce results containing non-standard frames, or may not work at all in clinical practice. We propose an end-to-end hybrid neural network based on an object detector to detect SCCs from fetal ultrasound videos efficiently, consisting of three modules: Anatomical Structure Detection (ASD), Cardiac Cycle Localization (CCL), and Standard Plane Recognition (SPR). Specifically, ASD uses an object detector to identify 9 key anatomical structures, 3 cardiac motion phases, and the corresponding confidence scores from fetal ultrasound videos. On this basis, we propose a joint probability method in the CCL module to learn the cardiac motion cycle from the 3 cardiac motion phases. In SPR, to reduce the impact of structure detection errors on the accuracy of standard plane recognition, we use the XGBoost algorithm to learn relational knowledge of the detected anatomical structures. We evaluate our method on test fetal ultrasound video datasets and clinical examination cases and achieve remarkable results. This study may pave the way for clinical practice.
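The "joint probability" idea for cycle localization can be caricatured as follows. The per-frame confidences, the product scoring rule, and the two-phase simplification are all assumptions for illustration, not HFSCCD's actual method: given detector confidences that each frame is an ES or ED frame, pick the ES frame and a later ED frame whose joint confidence is highest.

```python
import numpy as np

def localize_cycle(p_es: np.ndarray, p_ed: np.ndarray) -> tuple:
    """Pick the (ES frame, later ED frame) pair maximizing the
    joint confidence p_es[i] * p_ed[j] with j > i."""
    best, pair = -1.0, (0, 1)
    for i in range(len(p_es) - 1):
        # Best ED candidate strictly after frame i.
        j = i + 1 + int(np.argmax(p_ed[i + 1:]))
        score = p_es[i] * p_ed[j]
        if score > best:
            best, pair = score, (i, j)
    return pair

# Toy per-frame phase confidences for a 5-frame clip.
p_es = np.array([0.1, 0.9, 0.2, 0.3, 0.1])
p_ed = np.array([0.2, 0.1, 0.1, 0.8, 0.2])
cycle = localize_cycle(p_es, p_ed)
```

In the paper's pipeline such phase confidences would come from the ASD object detector, and the SPR module would additionally verify that every frame in the chosen span is a standard view.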
6. Zhang J, Xiao S, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Advances in the Application of Artificial Intelligence in Fetal Echocardiography. J Am Soc Echocardiogr 2024;37:550-561. PMID: 38199332. DOI: 10.1016/j.echo.2023.12.013.
Abstract
Congenital heart disease is a severe health risk for newborns. Early detection of abnormalities in fetal cardiac structure and function during pregnancy can help patients seek timely diagnostic and therapeutic advice, and early intervention planning can significantly improve fetal survival rates. Echocardiography is one of the most accessible and widely used tools in the diagnosis of fetal congenital heart disease. However, traditional fetal echocardiography has limitations due to fetal, maternal, and ultrasound equipment factors and is highly dependent on the skill level of the operator. Artificial intelligence (AI) technology, with its rapid development utilizing advanced computer algorithms, has great potential to empower sonographers in time-saving and accurate diagnosis and to bridge the skill gap across regions. In recent years, AI-assisted fetal echocardiography has been successfully applied to a wide range of ultrasound diagnoses. This article systematically reviews the applications of AI in fetal echocardiography in terms of image processing, biometrics, and disease diagnosis, and provides an outlook for future research.
Affiliation(s)
- Junmin Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Sushan Xiao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Ye Zhu
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Zisang Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Haiyan Cao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Mingxing Xie
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Li Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
7. Zhao H, Zheng Q, Teng C, Yasrab R, Drukker L, Papageorghiou AT, Noble JA. Memory-based unsupervised video clinical quality assessment with multi-modality data in fetal ultrasound. Med Image Anal 2023;90:102977. PMID: 37778101. DOI: 10.1016/j.media.2023.102977.
Abstract
In obstetric sonography, the quality of acquisition of ultrasound scan video is crucial for accurate (manual or automated) biometric measurement and fetal health assessment. However, the nature of fetal ultrasound involves free-hand probe manipulation, and this can make it challenging to capture high-quality videos for fetal biometry, especially for the less-experienced sonographer. Manually checking the quality of acquired videos would be time-consuming and subjective, and requires a comprehensive understanding of fetal anatomy. Thus, it would be advantageous to develop an automatic quality assessment method to support video standardization and improve the diagnostic accuracy of video-based analysis. In this paper, we propose a general and purely data-driven video-based quality assessment framework which directly learns a distinguishable feature representation from high-quality ultrasound videos alone, without anatomical annotations. Our solution effectively utilizes both spatial and temporal information of ultrasound videos. The spatio-temporal representation is learned by a bi-directional reconstruction between the video space and the feature space, enhanced by a key-query memory module proposed in the feature space. To further improve performance, two additional modalities are introduced in training: the sonographer gaze and optical flow derived from the video. Two different clinical quality assessment tasks in fetal ultrasound are considered in our experiments, i.e., measurement of the fetal head circumference and cerebellar diameter; in both of these, low-quality videos are detected by their large reconstruction error in the feature space. Extensive experimental evaluation demonstrates the merits of our approach.
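The detection rule at the end of this abstract, flagging a video whose feature-space reconstruction error is large, reduces to a threshold test. A minimal sketch (the feature vectors, reconstruction, and threshold value are placeholders, not the paper's learned components):

```python
import numpy as np

def reconstruction_error(feat: np.ndarray, recon: np.ndarray) -> float:
    """Mean squared error between a feature vector and its reconstruction."""
    return float(np.mean((feat - recon) ** 2))

def is_low_quality(feat: np.ndarray, recon: np.ndarray,
                   threshold: float = 0.5) -> bool:
    # A model trained only on high-quality videos reconstructs unfamiliar
    # (low-quality) inputs poorly, so a large error flags the acquisition.
    return reconstruction_error(feat, recon) > threshold

good = np.array([1.0, 2.0, 3.0])
near = is_low_quality(good, good + 0.1)   # small error -> high quality
far = is_low_quality(good, good + 1.0)    # large error -> flagged
```

In the paper the reconstruction comes from the learned bi-directional mapping with the key-query memory module; only the thresholding logic is shown here.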
Affiliation(s)
- He Zhao
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
- Qingqing Zheng
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Clare Teng
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
- Robail Yasrab
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
- Lior Drukker
- Nuffield Department of Women's and Reproductive Health, University of Oxford, United Kingdom; Department of Obstetrics and Gynecology, Tel-Aviv University, Israel
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, United Kingdom
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
8. Slimani S, Hounka S, Mahmoudi A, Rehah T, Laoudiyi D, Saadi H, Bouziyane A, Lamrissi A, Jalal M, Bouhya S, Akiki M, Bouyakhf Y, Badaoui B, Radgui A, Mhlanga M, Bouyakhf EH. Fetal biometry and amniotic fluid volume assessment end-to-end automation using Deep Learning. Nat Commun 2023;14:7047. PMID: 37923713. PMCID: PMC10624828. DOI: 10.1038/s41467-023-42438-5.
Abstract
Fetal biometry and amniotic fluid volume assessments are two essential yet repetitive tasks in fetal ultrasound screening scans, aiding in the detection of potentially life-threatening conditions. However, these assessment methods can occasionally yield unreliable results. Advances in deep learning have opened up new avenues for automated measurements in fetal ultrasound, demonstrating human-level performance in various fetal ultrasound tasks. Nevertheless, the majority of these studies are retrospective in silico studies, with few including African patients in their datasets. In this study we developed and prospectively assessed the performance of deep learning models for end-to-end automation of fetal biometry and amniotic fluid volume measurements. These models were trained using a newly constructed database of 172,293 de-identified Moroccan fetal ultrasound images, supplemented with publicly available datasets. The models were then tested on prospectively acquired video clips from a consecutive series of 172 pregnant people gathered at four healthcare centers in Morocco. Our results demonstrate that the 95% limits of agreement between the models and practitioners were narrower than the reported intra- and inter-observer variability among expert human sonographers for all parameters under study. This means that these models could be deployed in clinical conditions to alleviate time-consuming, repetitive tasks and make fetal ultrasound more accessible in limited-resource environments.
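The "95% limits of agreement" used to compare models with practitioners are the standard Bland-Altman quantities: the mean difference (bias) plus or minus 1.96 times the standard deviation of the paired differences. A minimal sketch with toy measurements:

```python
import numpy as np

def limits_of_agreement(model: np.ndarray, human: np.ndarray):
    """Bland-Altman 95% limits of agreement: bias ± 1.96 * SD(differences)."""
    diffs = model - human
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation
    return bias - 1.96 * sd, bias, bias + 1.96 * sd

# Toy paired measurements (e.g. one biometric parameter, arbitrary units).
model = np.array([10.1, 9.8, 10.3, 9.9, 10.0])
human = np.array([10.0, 10.0, 10.0, 10.0, 10.0])
lo, bias, hi = limits_of_agreement(model, human)
```

Narrower limits than the human intra-/inter-observer variability, as reported in the study, indicate that model-human disagreement is within the range of human-human disagreement.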
Affiliation(s)
- Saad Slimani
- Deepecho, 10106, Rabat, Morocco
- Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Salaheddine Hounka
- Telecommunications Systems Services and Networks lab (STRS Lab), INPT, 10112, Rabat, Morocco
- Abdelhak Mahmoudi
- Deepecho, 10106, Rabat, Morocco
- Ecole Normale Supérieure, LIMIARF, Mohammed V University in Rabat, 4014, Rabat, Morocco
- Dalal Laoudiyi
- Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Hanane Saadi
- Mohammed VI University Hospital, 60049, Oujda, Morocco
- Amal Bouziyane
- Université Mohammed VI des Sciences de la Santé, Hôpital Universitaire Cheikh Khalifa, 82403, Casablanca, Morocco
- Amine Lamrissi
- Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Mohamed Jalal
- Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Said Bouhya
- Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Bouabid Badaoui
- Laboratory of Biodiversity, Ecology, and Genome, Department of Biology, Faculty of Sciences, Mohammed V University in Rabat, 1014, Rabat, Morocco
- African Sustainable Agriculture Research Institute (ASARI), Mohammed VI Polytechnic University (UM6P), 43150, Laâyoune, Morocco
- Amina Radgui
- Telecommunications Systems Services and Networks lab (STRS Lab), INPT, 10112, Rabat, Morocco
- Musa Mhlanga
- Radboud Institute for Molecular Life Sciences, Epigenomics & Single Cell Biophysics, 6525 XZ, Nijmegen, the Netherlands
9. Yousefpour Shahrivar R, Karami F, Karami E. Enhancing Fetal Anomaly Detection in Ultrasonography Images: A Review of Machine Learning-Based Approaches. Biomimetics (Basel) 2023;8:519. PMID: 37999160. PMCID: PMC10669151. DOI: 10.3390/biomimetics8070519.
Abstract
Fetal development is a critical phase in prenatal care, demanding the timely identification of anomalies in ultrasound images to safeguard the well-being of both the unborn child and the mother. Medical imaging has played a pivotal role in detecting fetal abnormalities and malformations. However, despite significant advances in ultrasound technology, the accurate identification of irregularities in prenatal images continues to pose considerable challenges, often demanding substantial time and expertise from medical professionals. In this review, we survey recent developments in machine learning (ML) methods applied to fetal ultrasound images. Specifically, we focus on a range of ML algorithms employed in the context of fetal ultrasound, encompassing tasks such as image classification, object recognition, and segmentation. We highlight how these innovative approaches can enhance ultrasound-based fetal anomaly detection and provide insights for future research and clinical implementations, and we emphasize the need for further research in this domain to enable more effective ultrasound-based fetal anomaly detection.
Affiliation(s)
- Ramin Yousefpour Shahrivar
- Department of Biology, College of Convergent Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Fatemeh Karami
- Department of Medical Genetics, Applied Biophotonics Research Center, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Ebrahim Karami
- Department of Engineering and Applied Sciences, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada
10. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023;12:6833. PMID: 37959298. PMCID: PMC10649694. DOI: 10.3390/jcm12216833.
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is considered cost-effective and easily accessible but is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to review recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened using the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with available full texts were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023: 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
11. Ghabri H, Alqahtani MS, Ben Othman S, Al-Rasheed A, Abbas M, Almubarak HA, Sakli H, Abdelkarim MN. Transfer learning for accurate fetal organ classification from ultrasound images: a potential tool for maternal healthcare providers. Sci Rep 2023;13:17904. PMID: 37863944. PMCID: PMC10589237. DOI: 10.1038/s41598-023-44689-0.
Abstract
Ultrasound imaging is commonly used to monitor fetal development. It has the advantage of being real-time, low-cost, non-invasive, and easy to use. However, fetal organ detection is a challenging task for obstetricians, as it depends on several factors such as the position of the fetus, the habitus of the mother, and the imaging technique. In addition, image interpretation must be performed by a trained healthcare professional who can take into account all relevant clinical factors. Artificial intelligence is playing an increasingly important role in medical imaging and can help solve many of the challenges associated with fetal organ classification. In this paper, we propose a deep-learning model for automating fetal organ classification from ultrasound images. We trained and tested the model on two datasets of fetal ultrasound images from different regions, recorded with different machines, to ensure effective detection of fetal organs. Training was performed on a labeled dataset with annotations for fetal organs such as the brain, abdomen, femur, and thorax, as well as the maternal cervix. The model was trained to detect these organs from fetal ultrasound images using a deep convolutional neural network architecture. Following training, the model, DenseNet169, was assessed on a separate test dataset. The results were promising: an accuracy of 99.84%, an F1 score of 99.84%, and an AUC of 98.95%. Our study showed that the proposed model outperformed both traditional methods relying on manual interpretation of ultrasound images by experienced clinicians and other deep-learning-based methods using different network architectures and training strategies.
This study may contribute to the development of more accessible and effective maternal health services and improve the health of mothers and their newborns worldwide.
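The accuracy, F1 score, and AUC figures reported in this abstract are standard classification metrics; as a purely illustrative sketch (plain Python, not the authors' evaluation code), they can be computed from labels and prediction scores as follows:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for one class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case is scored above a random negative one (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For the paper's multi-class organ labels, the binary F1 would be averaged over classes (e.g. macro-F1); the abstract does not specify the authors' averaging scheme.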
Affiliation(s)
- Haifa Ghabri
- MACS Laboratory, National Engineering School of Gabes, University of Gabes, 6029, Gabès, Tunisia
- Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421, Abha, Saudi Arabia
- BioImaging Unit, Space Research Centre, Michael Atiyah Building, University of Leicester, Leicester, LE1 7RH, UK
- Soufiene Ben Othman
- PRINCE Laboratory Research, ISITcom, Hammam Sousse, University of Sousse, Sousse, Tunisia
- Amal Al-Rasheed
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Mohamed Abbas
- Electrical Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
- Hassan Ali Almubarak
- Division of Radiology, Department of Medicine, College of Medicine and Surgery, King Khalid University (KKU), Abha, Aseer, Saudi Arabia
- Hedi Sakli
- EITA Consulting, 5 Rue Du Chant des Oiseaux, 78360, Montesson, France
12
Ramirez Zegarra R, Ghi T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. Ultrasound Obstet Gynecol 2023; 62:185-194. [PMID: 36436205 DOI: 10.1002/uog.26130] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 11/06/2022] [Accepted: 11/21/2022] [Indexed: 06/16/2023]
Abstract
Deep learning is considered the leading artificial intelligence tool in image analysis. Deep-learning algorithms excel at image recognition, which makes them valuable in medical imaging. Obstetric ultrasound has become the gold standard imaging modality for detection and diagnosis of fetal malformations. However, ultrasound relies heavily on the operator's experience, making it unreliable in inexperienced hands. Several studies have proposed deep-learning models as a tool to support sonographers, in an attempt to overcome these problems inherent in ultrasound. Deep learning has many clinical applications in the field of fetal imaging, including identification of normal and abnormal fetal anatomy and measurement of fetal biometry. In this Review, we provide a comprehensive explanation of the fundamentals of deep learning in fetal imaging, with particular focus on its clinical applicability. © 2022 International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- R Ramirez Zegarra
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- T Ghi
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
13
Tenajas R, Miraut D, Illana CI, Alonso-Gonzalez R, Arias-Valcayo F, Herraiz JL. Recent Advances in Artificial Intelligence-Assisted Ultrasound Scanning. Appl Sci 2023; 13:3693. [DOI: 10.3390/app13063693] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/09/2024]
Abstract
Ultrasound (US) is a flexible imaging modality used globally as a first-line medical exam procedure in many different clinical cases. It benefits from the continued evolution of ultrasonic technologies and a well-established US-based digital health system. Nevertheless, its diagnostic performance still presents challenges due to the inherent characteristics of US imaging, such as manual operation and significant operator dependence. Artificial intelligence (AI) has proven able to recognize complicated scan patterns and provide quantitative assessments of imaging data. AI technology therefore has the potential to help physicians obtain more accurate and repeatable outcomes with US. In this article, we review recent advances in AI-assisted US scanning. We have identified the main areas where AI is being used to facilitate US scanning, such as standard plane recognition and organ identification, the extraction of standard clinical planes from 3D US volumes, and the scanning guidance of US acquisitions performed by humans or robots. In general, the lack of standardization and reference datasets in this field makes it difficult to perform comparative studies among the different proposed methods. More open-access repositories of large US datasets with detailed acquisition information are needed to facilitate the development of this very active research field, which is expected to have a very positive impact on US imaging.
Affiliation(s)
- Rebeca Tenajas
- Family Medicine Department, Centro de Salud de Arroyomolinos, Arroyomolinos, 28939 Madrid, Spain
- David Miraut
- Advanced Health Technology Department, GMV, Tres Cantos, 28760 Madrid, Spain
- Carlos I. Illana
- Advanced Health Technology Department, GMV, Tres Cantos, 28760 Madrid, Spain
- Fernando Arias-Valcayo
- Nuclear Physics Group and IPARCOS, Faculty of Physical Sciences, University Complutense of Madrid, CEI Moncloa, 28040 Madrid, Spain
- Joaquin L. Herraiz
- Nuclear Physics Group and IPARCOS, Faculty of Physical Sciences, University Complutense of Madrid, CEI Moncloa, 28040 Madrid, Spain
- Health Research Institute of the Hospital Clínico San Carlos (IdISSC), 28040 Madrid, Spain
14
Balagalla UB, Jayasooriya J, de Alwis C, Subasinghe A. Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images – a survey. Comput Methods Biomech Biomed Eng Imaging Vis 2023. [DOI: 10.1080/21681163.2023.2179343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Affiliation(s)
- U. B. Balagalla
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- J.V.D. Jayasooriya
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- C. de Alwis
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- A. Subasinghe
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
15
Bardi F, Bakker M, Elvan-Taşpınar A, Kenkhuis MJA, Fridrichs J, Bakker MK, Birnie E, Bilardo CM. Organ-specific learning curves of sonographers performing first-trimester anatomical screening and impact of score-based evaluation on ultrasound image quality. PLoS One 2023; 18:e0279770. [PMID: 36730474 PMCID: PMC9894388 DOI: 10.1371/journal.pone.0279770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 12/13/2022] [Indexed: 02/04/2023] Open
Abstract
INTRODUCTION First-trimester anatomical screening (FTAS) by ultrasound has been introduced in many countries as screening for aneuploidies, but also as early screening for fetal structural abnormalities. While much emphasis has been put on the detection rates of FTAS, little is known about the performance of quality-control programs and sonographers' learning curves for FTAS. The aims of the study were to evaluate the performance of a score-based quality-control system for FTAS and to assess the learning curves of sonographers by evaluating the images of the anatomical planes that were part of the FTAS protocol. METHODS Between 2012 and 2015, pregnant women opting for the combined test in the North of the Netherlands were also invited to participate in a prospective cohort study extending the ultrasound investigation to include a first-trimester ultrasound performed according to a protocol. All anatomical planes included in the protocol were documented by pictures stored for each examination in logbooks. The logbooks of six sonographers were independently assessed by two fetal medicine experts. For each sonographer, the logbooks of examinations 25, 50, 75 and 100, plus four additional randomly selected logbooks, were scored for correct visualization of 12 organ-system planes. A plane-specific score of at least 70% was considered sufficient. The intra-class correlation coefficient (ICC) was used to measure inter-assessor agreement for the cut-off scores. Organ-specific learning curves were defined by single cumulative sum (CUSUM) analysis. RESULTS Sixty-four logbooks were assessed. Mean duration of the scan was 22 ± 6 minutes and mean gestational age was 12+6 weeks. In total, 57% of the logbooks were graded as sufficient. The most sufficient scores were obtained for the fetal skull (88%) and brain (70%), while the lowest scores were for the face (29%) and spine (38%).
Five sonographers showed a learning curve for the skull and stomach, four for the brain and limbs, three for the bladder and kidneys, two for the diaphragm and abdominal wall, one for the heart and spine, and none for the face and neck. CONCLUSION Learning curves for FTAS differ per organ system and per sonographer. Although score-based evaluation can validly assess image quality, more dynamic approaches may better reflect clinical performance.
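The single-CUSUM analysis used to define the organ-specific learning curves can be sketched as a running sum of per-examination outcomes against a target failure rate; the outcome coding and the target rate `p0` below are illustrative assumptions, not the study's actual parameters:

```python
def cusum(outcomes, p0=0.2):
    """Learning-curve CUSUM: each failure (0) raises the score by 1 - p0,
    each success (1) lowers it by p0, i.e. S_n = sum(failure_i - p0).
    A downward-trending curve signals performance better than the target
    failure rate p0."""
    s, curve = 0.0, []
    for ok in outcomes:
        s += (1 - p0) if ok == 0 else -p0
        curve.append(s)
    return curve

# A hypothetical trainee: two early failures, then consistent success.
trend = cusum([0, 0, 1, 1, 1, 1, 1, 1, 1, 1], p0=0.2)
```

In sequential CUSUM testing the curve would additionally be compared against decision limits derived from chosen acceptable and unacceptable failure rates; those limits are omitted from this sketch.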
Affiliation(s)
- Francesca Bardi
- Department of Obstetrics and Gynecology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Merel Bakker
- Department of Obstetrics and Gynecology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Ayten Elvan-Taşpınar
- Department of Obstetrics and Gynecology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Monique J. A. Kenkhuis
- Department of Obstetrics and Gynecology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Jeske Fridrichs
- Department of Obstetrics and Gynecology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Marian K. Bakker
- Department of Obstetrics and Gynecology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Erwin Birnie
- Department of Obstetrics and Gynecology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Department of Genetics, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Caterina M. Bilardo
- Department of Obstetrics and Gynecology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Department of Obstetrics and Gynecology, Amsterdam University Medical Centers, Amsterdam, The Netherlands
16
Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861 DOI: 10.1016/j.media.2022.102629] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 07/12/2022] [Accepted: 09/10/2022] [Indexed: 11/07/2022]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing fetal ultrasound (US) images. A number of survey papers are available in the field today, but most focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and discussed from both the methodology and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical structure analysis and (iii) biometry parameter estimation. For each category, the main limitations and open issues are presented. Summary tables are included to facilitate comparison among the different approaches. In addition, emerging applications are outlined. Publicly available datasets and performance metrics commonly used to assess algorithm performance are also summarized. The paper ends with a critical summary of the current state of the art of DL algorithms for fetal US image analysis and a discussion of the challenges that researchers working in the field have to tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
17
Imaging fetal anatomy. Semin Cell Dev Biol 2022; 131:78-92. [PMID: 35282997 DOI: 10.1016/j.semcdb.2022.02.023] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 02/23/2022] [Accepted: 02/23/2022] [Indexed: 02/07/2023]
Abstract
Due to advancements in ultrasound techniques, the focus of antenatal ultrasound screening is moving towards the first trimester of pregnancy. The early first trimester, however, remains in part a 'black box', due to the size of the developing embryo and the limitations of contemporary scanning techniques. There is therefore a need for images of early anatomical development to improve our understanding of this area. By using new imaging techniques, we can not only obtain better images to further our knowledge of early embryonic development; clear images of embryonic and fetal development can also be used in training, e.g. for sonographers and fetal surgeons, or to educate parents expecting a child with a fetal anomaly. The aim of this review is to provide an overview of the past, present and future techniques used to capture images of the developing human embryo and fetus, and to provide the reader with the newest insights into upcoming and promising imaging techniques. The reader is taken from the earliest drawings of da Vinci, through advancements in in utero ultrasound and MR imaging techniques, towards high-resolution ex utero imaging using micro-CT and ultra-high-field MRI. Finally, a future perspective is given on the use of artificial intelligence in ultrasound and on potential new imaging techniques, such as synchrotron-radiation-based CT, to increase our knowledge regarding human development.
18
Song Y, Zhong Z, Zhao B, Zhang P, Wang Q, Wang Z, Yao L, Lv F, Hu Y. Medical Ultrasound Image Quality Assessment for Autonomous Robotic Screening. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3170209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Yuxin Song
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhaoming Zhong
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Baoliang Zhao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Peng Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiong Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ziwen Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Liang Yao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Faqin Lv
- Department of Ultrasound, The Third Medical Centre of Chinese PLA General Hospital, Beijing, China
- Ying Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
19
Sakai A, Komatsu M, Komatsu R, Matsuoka R, Yasutomi S, Dozen A, Shozu K, Arakaki T, Machino H, Asada K, Kaneko S, Sekizawa A, Hamamoto R. Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening. Biomedicines 2022; 10:551. [PMID: 35327353 PMCID: PMC8945208 DOI: 10.3390/biomedicines10030551] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 02/18/2022] [Accepted: 02/21/2022] [Indexed: 12/10/2022] Open
Abstract
Diagnostic support tools based on artificial intelligence (AI) have exhibited high performance in various medical fields. However, their clinical application remains challenging because of the lack of explanatory power in AI decisions (the black-box problem), which makes it difficult to build trust with medical professionals. Visualizing the internal representation of deep neural networks can increase explanatory power and improve the confidence of medical professionals in AI decisions. We propose a novel deep-learning-based explainable representation, the "graph chart diagram", to support fetal cardiac ultrasound screening, which has low detection rates of congenital heart diseases due to the difficulty of mastering the technique. Using this representation, screening performance, measured as the arithmetic mean of the area under the receiver operating characteristic curve, improves from 0.966 to 0.975 for experts, from 0.829 to 0.890 for fellows, and from 0.616 to 0.748 for residents. This is the first demonstration in which examiners used a deep-learning-based explainable representation to improve the performance of fetal cardiac ultrasound screening, highlighting the potential of explainable AI to augment examiner capabilities.
Affiliation(s)
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan; (A.S.); (S.Y.)
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (R.K.); (R.M.)
- Department of NCC Cancer Science, Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.); (H.M.); (K.A.); (S.K.)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Reina Komatsu
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (R.K.); (R.M.)
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan; (T.A.); (A.S.)
- Ryu Matsuoka
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (R.K.); (R.M.)
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan; (T.A.); (A.S.)
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan; (A.S.); (S.Y.)
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (R.K.); (R.M.)
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.); (H.M.); (K.A.); (S.K.)
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.); (H.M.); (K.A.); (S.K.)
- Tatsuya Arakaki
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan; (T.A.); (A.S.)
- Hidenori Machino
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.); (H.M.); (K.A.); (S.K.)
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.); (H.M.); (K.A.); (S.K.)
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Syuzo Kaneko
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.); (H.M.); (K.A.); (S.K.)
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Akihiko Sekizawa
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan; (T.A.); (A.S.)
- Ryuji Hamamoto
- Department of NCC Cancer Science, Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.); (H.M.); (K.A.); (S.K.)
20
Nurmaini S, Rachmatullah MN, Sapitri AI, Darmawahyuni A, Tutuko B, Firdaus F, Partan RU, Bernolian N. Deep Learning-Based Computer-Aided Fetal Echocardiography: Application to Heart Standard View Segmentation for Congenital Heart Defects Detection. Sensors (Basel) 2021; 21:8007. [PMID: 34884008 PMCID: PMC8659935 DOI: 10.3390/s21238007] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 11/28/2021] [Accepted: 11/29/2021] [Indexed: 12/02/2022]
Abstract
Accurate segmentation of the fetal heart in echocardiography images is essential for detecting structural abnormalities such as congenital heart defects (CHDs). Due to wide variations attributable to different factors, such as maternal obesity, abdominal scars, amniotic fluid volume, and great-vessel connections, this process is still a challenging problem. Even with expertise, CHD detection rates are in general substandard, and the accuracy of measurements remains highly dependent on the operator's training, skills, and experience. To make this process automatic, this study proposes deep-learning-based computer-aided fetal heart echocardiography examination with an instance segmentation approach, which inherently segments the four standard heart views and detects defects simultaneously. We conducted several experiments with 1149 fetal heart images, predicting 24 objects: the four shapes of the fetal heart standard views, 17 heart-chamber objects across the views, and three cases of congenital heart defect. The proposed model achieved satisfactory performance for standard-view segmentation, with a 79.97% intersection over union and an 89.70% Dice similarity coefficient. It also performed well in CHD detection, with mean average precision around 98.30% for intra-patient variation and 82.42% for inter-patient variation. We believe that automatic segmentation and detection techniques could make an important contribution toward improving congenital heart disease diagnosis rates.
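The intersection over union and Dice similarity coefficient quoted for standard-view segmentation are standard overlap metrics; a minimal sketch for flattened binary masks (illustrative plain Python, not the authors' implementation):

```python
def iou(mask_a, mask_b):
    """Intersection over union of two same-length binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0
```

For a single mask pair the two are monotonically related (Dice = 2·IoU / (1 + IoU)), though dataset-level averages of each metric need not obey the identity exactly.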
Affiliation(s)
- Siti Nurmaini
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang 30139, Indonesia; (M.N.R.); (A.I.S.); (A.D.); (B.T.) (F.F.)
| | - Muhammad Naufal Rachmatullah
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang 30139, Indonesia; (M.N.R.); (A.I.S.); (A.D.); (B.T.) (F.F.)
| | - Ade Iriani Sapitri
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang 30139, Indonesia; (M.N.R.); (A.I.S.); (A.D.); (B.T.) (F.F.)
| | - Annisa Darmawahyuni
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang 30139, Indonesia; (M.N.R.); (A.I.S.); (A.D.); (B.T.) (F.F.)
| | - Bambang Tutuko
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang 30139, Indonesia; (M.N.R.); (A.I.S.); (A.D.); (B.T.) (F.F.)
| | - Firdaus Firdaus
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang 30139, Indonesia; (M.N.R.); (A.I.S.); (A.D.); (B.T.) (F.F.)
| | | | - Nuswil Bernolian
- Division of Maternal-Fetal Medicine, Department of Obstetrics and Gynecology, Mohammad Hoesin General Hospital, Palembang 30126, Indonesia;
| |