1. Wang X, Wang L, Hou X, Li J, Li J, Ma X. Comparison and analysis of deep learning models for discriminating longitudinal and oblique vaginal septa based on ultrasound imaging. BMC Med Imaging 2024; 24:347. [PMID: 39716160] [DOI: 10.1186/s12880-024-01507-x] [Received: 08/29/2024] [Accepted: 11/19/2024] [Indexed: 12/25/2024]
Abstract
BACKGROUND The longitudinal vaginal septum and the oblique vaginal septum are female müllerian duct anomalies that are relatively rarely diagnosed but severely fertility-threatening in clinical practice. Ultrasound imaging is commonly used to examine the two vaginal malformations, but an accurate differential diagnosis is in practice difficult to make. This study assessed the performance of multiple deep learning models based on ultrasonographic images for distinguishing longitudinal from oblique vaginal septa. METHODS Cases and ultrasound images of longitudinal and oblique vaginal septa were collected. Two convolutional neural network (CNN)-based models (ResNet50 and ConvNeXt-B) and one base-resolution variant of a vision transformer (ViT)-based neural network (ViT-B/16) were selected to construct ultrasonographic classification models. Receiver operating characteristic (ROC) curve analysis and four indicators (accuracy, sensitivity, specificity, and area under the curve (AUC)) were used to compare the diagnostic performance of the deep learning models. RESULTS A total of 70 cases with 426 ultrasound images were included for model construction using 5-fold cross-validation. The CNN-based models (ResNet50 and ConvNeXt-B) presented significantly better case-level discriminative efficacy, with accuracy of 0.842 (variance 0.004; 95% CI 0.639-0.997) and 0.897 (variance 0.004; 95% CI 0.734-1.000), specificity of 0.709 (variance 0.041; 95% CI 0.505-0.905) and 0.811 (variance 0.017; 95% CI 0.622-0.979), and AUC of 0.842 (variance 0.004; 95% CI 0.639-0.997) and 0.897 (variance 0.004; 95% CI 0.734-1.000), than the transformer-based model (ViT-B/16), with accuracy of 0.668 (variance 0.014; 95% CI 0.407-0.920), specificity of 0.572 (variance 0.024; 95% CI 0.304-0.831), and AUC of 0.681 (variance 0.030; 95% CI 0.434-0.908). There was no significant difference in AUC between ConvNeXt-B and ResNet50 (P = 0.841). CONCLUSIONS The convolutional neural network-based model ConvNeXt-B shows promising capability for discriminating longitudinal and oblique vaginal septa on ultrasound images and is expected to be introduced into clinical ultrasonographic diagnostic systems.
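The study reports each metric with a variance and a 95% CI from 5-fold cross-validation. As an illustration of one common way such intervals are obtained (the paper does not state its exact CI method), the sketch below computes AUC via the Mann-Whitney statistic and a percentile-bootstrap 95% CI; all function and variable names are our own, not the authors'.

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney U statistic (equivalent to ROC AUC)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap 95% CI for the AUC, resampling cases with replacement."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    n = len(y_true)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():
            continue  # a resample needs both classes for AUC to be defined
        stats.append(auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

With case-level scores from each cross-validation fold, the same pattern would yield the per-metric intervals quoted above.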
Affiliation(s)
- Xiangyu Wang
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No.1095 Jiefang Avenue, Wuhan, Hubei Province, 430030, China
- Liang Wang
- Computer Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No. 1095 Jiefang Avenue, Wuhan, Hubei Province, 430030, China
- Xin Hou
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No.1095 Jiefang Avenue, Wuhan, Hubei Province, 430030, China
- Jingfang Li
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No.1095 Jiefang Avenue, Wuhan, Hubei Province, 430030, China
- Jin Li
- Computer Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No. 1095 Jiefang Avenue, Wuhan, Hubei Province, 430030, China
- Xiangyi Ma
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No.1095 Jiefang Avenue, Wuhan, Hubei Province, 430030, China
2. Li F, Li P, Liu Z, Liu S, Zeng P, Song H, Liu P, Lyu G. Application of artificial intelligence in VSD prenatal diagnosis from fetal heart ultrasound images. BMC Pregnancy Childbirth 2024; 24:758. [PMID: 39550543] [PMCID: PMC11568577] [DOI: 10.1186/s12884-024-06916-y] [Received: 10/10/2023] [Accepted: 10/21/2024] [Indexed: 11/18/2024]
Abstract
BACKGROUND To develop a combined artificial intelligence (AI) and ultrasound imaging approach that provides an accurate, objective, and efficient adjunctive diagnosis of fetal heart ventricular septal defects (VSD). METHODS A total of 1,451 fetal heart ultrasound images from 500 pregnant women were comprehensively analyzed between January 2016 and June 2022. The fetal heart region was manually labeled and the presence of VSD was discriminated by experts. Five-fold cross-validation was followed on the training set to develop the AI model for assisting in the diagnosis of VSD. The model was evaluated on the test set using metrics such as mAP@0.5, precision, recall, and F1 score. Its diagnostic accuracy and inference time were also compared with those of junior, intermediate, and senior doctors. RESULTS The mAP@0.5, precision, recall, and F1 score for AI diagnosis of VSD were 0.926, 0.879, 0.873, and 0.880, respectively. The accuracy of junior doctors and intermediate doctors improved by 6.7% and 2.8%, respectively, with the assistance of this system. CONCLUSIONS This study reports an AI-assisted diagnostic method for VSD with high agreement with manual recognition. The model also has a low parameter count and computational complexity, and can improve the diagnostic accuracy and speed of some physicians for VSD.
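The mAP@0.5, precision, recall, and F1 metrics above are standard object-detection measures. A minimal sketch of how precision, recall, and F1 follow from matching predicted boxes to ground truth at an IoU threshold of 0.5 (greedy confidence-ordered matching is one common convention; the paper's exact protocol is not specified):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_prf(preds, gts, iou_thr=0.5):
    """Greedy-match predictions (highest confidence first) to ground-truth
    boxes; return precision, recall, and F1 at the given IoU threshold."""
    preds = sorted(preds, key=lambda p: -p["conf"])
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p["box"], g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Each prediction here is a dict with a `box` and a `conf` score; both names are illustrative.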
Affiliation(s)
- Furong Li
- School of Information Science & Engineering, Lanzhou University, Lanzhou, 730000, China
- College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, 730070, China
- Ping Li
- Department of Gynecology and Obstetrics, The First Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Zhonghua Liu
- Department of Ultrasound, The First Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Shunlan Liu
- Department of Ultrasound, The Second Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Pan Zeng
- College of Medicine, Huaqiao University, Quanzhou, 362021, China
- Haisheng Song
- College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, 730070, China
- Peizhong Liu
- College of Medicine, Huaqiao University, Quanzhou, 362021, China
- Guorong Lyu
- Department of Ultrasound, The Second Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
3. Ibrahim NM, Alanize H, Alqahtani L, Alqahtani LJ, Alabssi R, Alsindi W, Alabssi H, AlMuhanna A, Althani H. Deep Learning Approaches for the Assessment of Germinal Matrix Hemorrhage Using Neonatal Head Ultrasound. Sensors (Basel) 2024; 24:7052. [PMID: 39517949] [PMCID: PMC11548650] [DOI: 10.3390/s24217052] [Received: 08/14/2024] [Revised: 10/17/2024] [Accepted: 10/29/2024] [Indexed: 11/16/2024]
Abstract
Germinal matrix hemorrhage (GMH) is a critical condition affecting premature infants, commonly diagnosed through cranial ultrasound imaging. This study presents an advanced deep learning approach for automated GMH grading using the YOLOv8 model. By analyzing a dataset of 586 infants, we classified ultrasound images into five distinct categories: Normal, Grade 1, Grade 2, Grade 3, and Grade 4. Utilizing transfer learning and data augmentation techniques, the YOLOv8 model achieved exceptional performance, with a mean average precision (mAP50) of 0.979 and a mAP50-95 of 0.724. These results indicate that the YOLOv8 model can significantly enhance the accuracy and efficiency of GMH diagnosis, providing a valuable tool to support radiologists in clinical settings.
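The mAP50 and mAP50-95 figures above aggregate per-class average precision (AP). As a sketch, the all-point-interpolated AP used by COCO-style evaluators can be computed as below (YOLOv8's evaluator may differ in detail; names here are our own):

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """Area under the precision-recall curve (all-point interpolation),
    given per-detection confidence scores, TP/FP flags, and the number
    of ground-truth objects for one class."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1 - tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # Prepend the recall=0 point and make precision monotonically decreasing.
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([1.0], precision))
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    return float(np.sum(np.diff(recall) * precision[1:]))

def mean_ap(per_class_ap):
    """mAP is simply the mean of per-class APs; mAP50-95 additionally
    averages this over IoU thresholds from 0.5 to 0.95."""
    return sum(per_class_ap) / len(per_class_ap)
```

A detection counts as a true positive at IoU >= 0.5 for mAP50; repeating the computation at thresholds 0.5, 0.55, ..., 0.95 and averaging gives mAP50-95.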
Affiliation(s)
- Nehad M. Ibrahim
- Departments of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31451, Saudi Arabia (L.A.)
- Hadeel Alanize
- Departments of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31451, Saudi Arabia
- Lara Alqahtani
- Departments of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31451, Saudi Arabia
- Lama J. Alqahtani
- Departments of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31451, Saudi Arabia
- Raghad Alabssi
- Departments of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31451, Saudi Arabia
- Wadha Alsindi
- Departments of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam 31451, Saudi Arabia
- Haila Alabssi
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Afnan AlMuhanna
- Department of Radiology, King Fahad University Hospital, Khobar 34445, Saudi Arabia
- Hanadi Althani
- Department of Radiology, King Fahad University Hospital, Khobar 34445, Saudi Arabia
4. Lan L, Luo D, Lian J, She L, Zhang B, Zhong H, Wang H, Wu H. Chromosomal Abnormalities Detected by Chromosomal Microarray Analysis and Karyotype in Fetuses with Ultrasound Abnormalities. Int J Gen Med 2024; 17:4645-4658. [PMID: 39429961] [PMCID: PMC11488349] [DOI: 10.2147/ijgm.s483290] [Received: 07/30/2024] [Accepted: 10/01/2024] [Indexed: 10/22/2024]
Abstract
Objective Chromosomal microarray analysis (CMA) is a first-line test for assessing the genetic etiology of fetal ultrasound abnormalities. The aim of this study was to evaluate the effectiveness of CMA in detecting chromosomal abnormalities in fetuses with ultrasound abnormalities, including structural and non-structural abnormalities. Methods A retrospective study was conducted on 368 fetuses with abnormal ultrasound findings who received interventional prenatal diagnosis at Meizhou People's Hospital from October 2022 to December 2023. Samples of villi, amniotic fluid, and umbilical cord blood were collected according to gestational week, and karyotype and CMA analyses were performed. The detection rate of chromosomal abnormalities for each category of ultrasonic abnormality was analyzed. Results Among the 368 fetuses with abnormal ultrasound, 114 (31.0%) had structural abnormalities, 225 (61.1%) had non-structural abnormalities, and 29 (7.9%) had combined structural and non-structural abnormalities. In fetuses with structural abnormalities, the detection rate of aneuploidy and pathogenic (P)/likely pathogenic (LP) copy number variations (CNVs) by CMA was 5.26% (6/114), the detection rate by karyotype was 2.63% (3/114), and the additional diagnostic yield of CMA was 2.63%. In fetuses with non-structural abnormalities, the detection rate by karyotype was 6.22% (14/225), the detection rate of aneuploidy and P/LP CNVs by CMA was 9.33% (21/225), and the additional diagnostic yield of CMA was 3.11%. There was no significant difference in the CMA detection rate among the structural, non-structural, and combined abnormality groups (5.3%, 9.3%, and 13.8%; p = 0.241), nor between the multiple and single ultrasonic abnormality groups (14.8% and 7.3%; p = 0.105). Conclusion Compared with karyotype analysis, CMA significantly improves the detection rate of genetic abnormalities in the prenatal diagnosis of fetuses with ultrasound abnormalities, making it a more effective tool than karyotyping alone for detecting chromosomal abnormalities.
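The study compares detection rates between methods and groups with chi-square tests. As a simplified stand-in for a two-group comparison (the study's actual tests span three groups, so this is illustrative only), a pooled two-proportion z-test can be sketched with the standard library alone:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled two-proportion z-test: a simplified stand-in for
    the chi-square comparisons of detection rates reported in the study."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# CMA vs karyotype in the structural-abnormality group: 6/114 vs 3/114.
z, p = two_proportion_z(6, 114, 3, 114)
```

With these small counts the difference is far from significant, which is consistent with the modest additional diagnostic yield reported above.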
Affiliation(s)
- Liubing Lan
- Department of Prenatal Diagnostic Center, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Department of Obstetrics, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Dandan Luo
- Department of Prenatal Diagnostic Center, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Department of Obstetrics, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Jianwen Lian
- Department of Prenatal Diagnostic Center, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Lingna She
- Department of Prenatal Diagnostic Center, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Department of Ultrasound, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Bosen Zhang
- Department of Prenatal Diagnostic Center, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Department of Ultrasound, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Hua Zhong
- Department of Prenatal Diagnostic Center, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Huaxian Wang
- Department of Prenatal Diagnostic Center, Meizhou People’s Hospital, Meizhou, People’s Republic of China
- Heming Wu
- Department of Prenatal Diagnostic Center, Meizhou People’s Hospital, Meizhou, People’s Republic of China
5. Li Y, Cai P, Huang Y, Yu W, Liu Z, Liu P. Deep learning based detection and classification of fetal lip in ultrasound images. J Perinat Med 2024; 52:769-777. [PMID: 39028804] [DOI: 10.1515/jpm-2024-0122] [Received: 03/19/2024] [Accepted: 07/07/2024] [Indexed: 07/21/2024]
Abstract
OBJECTIVES Fetal cleft lip is a common congenital defect. Given the delicacy and difficulty of observing the fetal lips, we utilized deep learning to develop a new model aimed at quickly and accurately assessing the development of the fetal lips during prenatal examinations. The model detects the fetal lips in ultrasound images and classifies them, providing a more objective prediction of fetal lip development. METHODS This study included 632 pregnant women in mid-pregnancy who underwent ultrasound examination of the fetal lips; both normal and abnormal fetal lip ultrasound images were collected. To improve the accuracy of detection and classification, we proposed and validated the Yolov5-ECA model. RESULTS The experimental results show that, compared with 10 currently popular models, our model achieved the best results in the detection and classification of the fetal lips. For detection, the mean average precision (mAP) at 0.5 and mAP at 0.5:0.95 were 0.920 and 0.630, respectively. In the classification of fetal lip ultrasound images, the accuracy reached 0.925. CONCLUSIONS The deep learning algorithm achieves accuracy consistent with manual evaluation in the detection and classification of the fetal lips. This automated recognition technology can provide a powerful tool for inexperienced young doctors, helping them conduct accurate examinations and diagnoses of the fetal lips.
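The "ECA" in Yolov5-ECA presumably refers to Efficient Channel Attention, a lightweight channel-reweighting module. A numpy sketch of the ECA forward pass on a single feature map follows; the real module learns its 1-D convolution weights during training, whereas here they are supplied explicitly for illustration, so treat this as an assumption-laden sketch rather than the authors' implementation:

```python
import numpy as np

def eca_attention(x, kernel, k_size=3):
    """Efficient Channel Attention (ECA) forward pass on one feature map
    x of shape (C, H, W). `kernel` holds the k 1-D conv weights that ECA
    would normally learn; this sketch has no training loop."""
    c = x.shape[0]
    y = x.mean(axis=(1, 2))                    # global average pooling -> (C,)
    pad = k_size // 2
    y_pad = np.pad(y, pad)                     # zero padding, as in a Conv1d
    conv = np.array([np.dot(y_pad[i:i + k_size], kernel) for i in range(c)])
    attn = 1.0 / (1.0 + np.exp(-conv))         # sigmoid channel weights
    return x * attn[:, None, None]             # rescale each channel
```

The idea is that each channel of the detector's feature map is re-weighted by a sigmoid gate computed from its neighbors' pooled activations, at negligible parameter cost.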
Affiliation(s)
- Yapeng Li
- School of Medicine, Huaqiao University, Quanzhou, China
- Peiya Cai
- Department of Gynecology and Obstetrics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Yubing Huang
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Weifeng Yu
- Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Zhonghua Liu
- Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Peizhong Liu
- School of Medicine, Huaqiao University, Quanzhou, China
- College of Engineering, Huaqiao University, Quanzhou, China
6. Weichert J, Scharf JL. Advancements in Artificial Intelligence for Fetal Neurosonography: A Comprehensive Review. J Clin Med 2024; 13:5626. [PMID: 39337113] [PMCID: PMC11432922] [DOI: 10.3390/jcm13185626] [Received: 07/30/2024] [Revised: 09/04/2024] [Accepted: 09/16/2024] [Indexed: 09/30/2024]
Abstract
Detailed sonographic assessment of the fetal neuroanatomy plays a crucial role in prenatal diagnosis, providing valuable insights into timely, well-coordinated fetal brain development and detecting even subtle anomalies that may impact neurodevelopmental outcomes. With recent advancements in artificial intelligence (AI) in general, and in medical imaging in particular, there has been growing interest in leveraging AI techniques to enhance the accuracy, efficiency, and clinical utility of fetal neurosonography. The paramount objective of this focused review is to discuss the latest developments in AI applications in this field, covering image analysis, automation of measurements, prediction models of neurodevelopmental outcomes, visualization techniques, and their integration into clinical routine.
Affiliation(s)
- Jan Weichert
- Division of Prenatal Medicine, Department of Gynecology and Obstetrics, University Hospital of Schleswig-Holstein, Ratzeburger Allee 160, 23538 Luebeck, Germany
- Elbe Center of Prenatal Medicine and Human Genetics, Willy-Brandt-Str. 1, 20457 Hamburg, Germany
- Jann Lennard Scharf
- Division of Prenatal Medicine, Department of Gynecology and Obstetrics, University Hospital of Schleswig-Holstein, Ratzeburger Allee 160, 23538 Luebeck, Germany
7. Ferreira I, Simões J, Pereira B, Correia J, Areia AL. Ensemble learning for fetal ultrasound and maternal-fetal data to predict mode of delivery after labor induction. Sci Rep 2024; 14:15275. [PMID: 38961231] [PMCID: PMC11222528] [DOI: 10.1038/s41598-024-65394-6] [Received: 01/09/2024] [Accepted: 06/19/2024] [Indexed: 07/05/2024]
Abstract
Providing adequate counseling on mode of delivery after induction of labor (IOL) is of utmost importance. Various AI algorithms have been developed for this purpose, but they rely on maternal-fetal data and do not include ultrasound (US) imaging. We used retrospectively collected clinical data from 808 subjects undergoing IOL, totaling 2024 US images, to train AI models to predict vaginal delivery (VD) and cesarean section (CS) outcomes after IOL. The best overall model used only clinical data (F1-score: 0.736; positive predictive value (PPV): 0.734). The imaging models employed fetal head, abdomen, and femur US images and showed limited discriminative results; the best used femur images (F1-score: 0.594; PPV: 0.580). Consequently, we constructed ensemble models to test whether US imaging could enhance the clinical data model. The best ensemble model included clinical data and US femur images (F1-score: 0.689; PPV: 0.693), presenting an interesting false positive/false negative trade-off: it accurately predicted CS in 4 additional cases, despite misclassifying 20 additional VD, resulting in a 6.0% decrease in average accuracy compared with the clinical data model. Hence, integrating US imaging into the clinical data model may represent a new development in assisting mode of delivery counseling.
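Ensembling a clinical-data model with an imaging model can be as simple as soft voting over predicted probabilities. The sketch below shows that idea together with the F1 and PPV metrics reported above; the weight, threshold, and all names are illustrative, not the authors' design:

```python
import numpy as np

def soft_vote(p_clinical, p_imaging, w=0.5, thr=0.5):
    """Weighted average of two models' predicted CS probabilities,
    thresholded into a final class (1 = cesarean section)."""
    p = w * np.asarray(p_clinical) + (1 - w) * np.asarray(p_imaging)
    return (p >= thr).astype(int)

def f1_and_ppv(y_true, y_pred):
    """F1-score and positive predictive value (precision) for binary labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    ppv = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * ppv * recall / (ppv + recall) if ppv + recall else 0.0
    return f1, ppv
```

Shifting the weight `w` toward the clinical model is one way to explore the false positive/false negative trade-off the abstract describes.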
Affiliation(s)
- Iolanda Ferreira
- Faculty of Medicine of University of Coimbra, Obstetrics Department, University and Hospitalar Centre of Coimbra, Coimbra, Portugal
- Maternidade Doutor Daniel de Matos, R. Miguel Torga, 3030-165, Coimbra, Portugal
- Joana Simões
- Department of Informatics Engineering, Centre for Informatics and Systems of the University of Coimbra, University of Coimbra, Coimbra, Portugal
- Beatriz Pereira
- Department of Physics, University of Coimbra, Coimbra, Portugal
- João Correia
- Department of Informatics Engineering, Centre for Informatics and Systems of the University of Coimbra, University of Coimbra, Coimbra, Portugal
- Ana Luísa Areia
- Faculty of Medicine of University of Coimbra, Obstetrics Department, University and Hospitalar Centre of Coimbra, Coimbra, Portugal
8. Drukker L. The Holy Grail of obstetric ultrasound: can artificial intelligence detect hard-to-identify fetal cardiac anomalies? Ultrasound Obstet Gynecol 2024; 64:5-9. [PMID: 38949769] [DOI: 10.1002/uog.27703] [Received: 03/01/2024] [Accepted: 04/18/2024] [Indexed: 07/02/2024]
Abstract
Linked article: This Editorial comments on articles by Day et al. and Taksøe‐Vester et al.
Affiliation(s)
- L Drukker
- Women's Ultrasound, Department of Obstetrics and Gynecology, Rabin-Beilinson Medical Center, School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel Aviv, Israel
- Oxford Maternal & Perinatal Health Institute (OMPHI), University of Oxford, Oxford, UK
9. Zhang J, Dawkins A. Artificial Intelligence in Ultrasound Imaging: Where Are We Now? Ultrasound Q 2024; 40:93-97. [PMID: 38842384] [DOI: 10.1097/ruq.0000000000000680] [Indexed: 06/07/2024]
Affiliation(s)
- Jie Zhang
- Department of Radiology, University of Kentucky, Lexington, KY
10. Ou Z, Bai J, Chen Z, Lu Y, Wang H, Long S, Chen G. RTSeg-net: A lightweight network for real-time segmentation of fetal head and pubic symphysis from intrapartum ultrasound images. Comput Biol Med 2024; 175:108501. [PMID: 38703545] [DOI: 10.1016/j.compbiomed.2024.108501] [Received: 01/18/2024] [Revised: 03/19/2024] [Accepted: 04/21/2024] [Indexed: 05/06/2024]
Abstract
The segmentation of the fetal head (FH) and pubic symphysis (PS) from intrapartum ultrasound images plays a pivotal role in monitoring labor progression and informing crucial clinical decisions. Achieving real-time segmentation with high accuracy on systems with limited hardware capabilities presents significant challenges. To address these challenges, we propose the real-time segmentation network (RTSeg-Net), a lightweight deep learning model that incorporates distribution-shifting convolutional blocks, tokenized multilayer perceptron blocks, and efficient feature fusion blocks. Designed for computational efficiency, RTSeg-Net minimizes resource demands while significantly enhancing segmentation performance. Our comprehensive evaluation on two distinct intrapartum ultrasound image datasets reveals that RTSeg-Net achieves segmentation accuracy on par with more complex state-of-the-art networks while utilizing merely 1.86 M parameters (about 6% of those networks' parameter counts) and operating seven times faster, achieving 31.13 frames per second on a Jetson Nano, a device known for its limited computing capacity. These achievements underscore RTSeg-Net's potential to provide accurate, real-time segmentation on low-power devices, broadening the scope of its application across the stages of labor. By facilitating real-time, accurate ultrasound image analysis on portable, low-cost devices, RTSeg-Net promises to improve intrapartum monitoring, making sophisticated diagnostic tools accessible to a wider range of healthcare settings.
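Segmentation accuracy in studies like this one is typically quantified with overlap metrics. A sketch of the Dice coefficient and IoU for binary masks follows (the abstract does not state the paper's exact metric, so this is illustrative):

```python
import numpy as np

def dice_and_iou(pred_mask, gt_mask, eps=1e-7):
    """Dice coefficient and IoU for binary segmentation masks, e.g. a
    predicted fetal-head region versus its expert annotation."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return float(dice), float(iou)
```

The small `eps` keeps the ratios defined when both masks are empty, a common convention for per-image averaging.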
Affiliation(s)
- Zhanhong Ou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Auckland Bioengineering Institute, University of Auckland, Auckland, 1010, New Zealand
- Zhide Chen
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Huijin Wang
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Shun Long
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Gaowen Chen
- Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
11. Bai J, Lu Y, Liu H, He F, Guo X. Editorial: New technologies improve maternal and newborn safety. Front Med Technol 2024; 6:1372358. [PMID: 38872737] [PMCID: PMC11169838] [DOI: 10.3389/fmedt.2024.1372358] [Received: 01/17/2024] [Accepted: 05/17/2024] [Indexed: 06/15/2024]
Affiliation(s)
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, China
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, China
- Huishu Liu
- Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou, China
- Fang He
- Department of Obstetrics and Gynecology, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Xiaohui Guo
- Department of Obstetrics, Shenzhen People’s Hospital, Shenzhen, China
12. Chen G, Bai J, Ou Z, Lu Y, Wang H. PSFHS: Intrapartum ultrasound image dataset for AI-based segmentation of pubic symphysis and fetal head. Sci Data 2024; 11:436. [PMID: 38698003] [PMCID: PMC11066050] [DOI: 10.1038/s41597-024-03266-4] [Received: 10/29/2023] [Accepted: 04/16/2024] [Indexed: 05/05/2024]
Abstract
During labor, intrapartum transperineal ultrasound examination serves as a valuable tool, allowing direct observation of the relative positional relationship between the pubic symphysis and fetal head (PSFH). Accurate assessment of fetal head descent and prediction of the most suitable mode of delivery rely heavily on this relationship. However, an objective and quantitative interpretation of the ultrasound images requires precise PSFH segmentation (PSFHS), a task that is both time-consuming and demanding. Given the potential of artificial intelligence (AI) in medical ultrasound image segmentation, the development and evaluation of AI-based models rely significantly on access to comprehensive and meticulously annotated datasets. Unfortunately, publicly accessible datasets tailored for PSFHS are notably scarce. Bridging this critical gap, we introduce a PSFHS dataset comprising 1358 images annotated at the pixel level. The annotation process adhered to standardized protocols and involved collaboration among medical experts. This dataset stands as the most expansive and comprehensive resource for PSFHS to date.
Affiliation(s)
- Gaowen Chen
- Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Jieyun Bai
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
- Auckland Bioengineering Institute, the University of Auckland, Auckland, New Zealand
- Zhanhong Ou
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
- Yaosheng Lu
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
- Huijin Wang
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
13. Calhoun BC, Uselman H, Olle EW. Development of Artificial Intelligence Image Classification Models for Determination of Umbilical Cord Vascular Anomalies. J Ultrasound Med 2024; 43:881-897. [PMID: 38279605] [DOI: 10.1002/jum.16418] [Received: 12/01/2023] [Revised: 01/05/2024] [Accepted: 01/07/2024] [Indexed: 01/28/2024]
Abstract
OBJECTIVE The goal of this work was to develop robust techniques for the processing and identification of single umbilical artery (SUA) using artificial intelligence (AI) image classification models. METHODS Ultrasound images obtained retrospectively were analyzed for blinding, text removal, AI training, and image prediction. After developing and testing text removal methods, we conducted a small study (40 images) using fastai/PyTorch to classify umbilical cord images. This data set was expanded to 286 lateral-CFI images that were used to compare different neural networks' performance, diagnostic value, and model predictions. RESULTS The AI-Optical Character Recognition method was superior in its ability to remove text from images. The small mixed single umbilical artery determination data set was tested with a pretrained ResNet34 neural network and obtained an average error rate of 0.083 (n = 3). The expanded data set was then tested with several AI models. The majority of the tested networks obtained an average error rate of <0.15 with minimal modifications. The ResNet34-default model performed best, with an image-classification error rate of 0.0175, sensitivity of 1.00, specificity of 0.97, and the ability to correctly infer classification. CONCLUSION This work provides a robust framework for AI classification of ultrasound images. AI successfully classified umbilical cord types in ultrasound images with excellent diagnostic value. Together, this study provides a reproducible framework for developing AI-specific ultrasound classification of umbilical cord or other diagnoses, to be used in conjunction with physicians for optimal patient care.
Affiliation(s)
- Byron C Calhoun
- Department of Obstetrics and Gynecology, WVU School of Medicine, Charleston Division, Charleston, West Virginia, USA
- Maternal-Fetal Medicine, WVU School of Medicine, Charleston Division, Charleston, West Virginia, USA
- Heather Uselman
- Resident, Department of Obstetrics and Gynecology, Charleston Area Medical Center, Charleston, West Virginia, USA
- Eric W Olle
- Research and Development, SynXBio Inc., Charleston, West Virginia, USA
14
Taksoee-Vester CA, Mikolaj K, Bashir Z, Christensen AN, Petersen OB, Sundberg K, Feragen A, Svendsen MBS, Nielsen M, Tolsgaard MG. AI supported fetal echocardiography with quality assessment. Sci Rep 2024; 14:5809. [PMID: 38461322 PMCID: PMC10925034 DOI: 10.1038/s41598-024-56476-6]
Abstract
This study aimed to develop a deep learning model to assess the quality of fetal echocardiography and to perform prospective clinical validation. The model was trained on data from the 18-22-week anomaly scan conducted in seven hospitals from 2008 to 2018. Prospective validation involved 100 patients from two hospitals. A total of 5363 images from 2551 pregnancies were used for training and validation. The model's segmentation accuracy depended on image quality, measured by a quality score (QS). It achieved an overall average accuracy of 0.91 (SD 0.09) across the test set, with images of above-average QS scoring 0.97 (SD 0.03). During prospective validation of 192 images, clinicians rated 44.8% (SD 9.8) of images as equal in quality, 18.69% (SD 5.7) favoring auto-captured images and 36.51% (SD 9.0) preferring manually captured ones. Images with above-average QS showed better agreement on segmentations (p < 0.001) and QS (p < 0.001) with fetal medicine experts. Auto-capture saved additional planes beyond protocol requirements, resulting in more comprehensive echocardiographies. Low QS had an adverse effect on both model performance and clinicians' agreement with model feedback. The findings highlight the importance of developing and evaluating AI models on 'noisy' real-life data rather than pursuing the highest accuracy possible with retrospective academic-grade data.
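The abstract reports segmentation accuracy and agreement with experts but does not specify the exact metric. The Dice coefficient is one standard way to score overlap between a model's segmentation and an expert's; the sketch below uses it purely as an assumed illustration, on toy 1-D masks standing in for flattened image masks:

```python
def dice_overlap(pred, ref):
    """Dice coefficient between two binary masks given as flat 0/1 lists.
    2*|A∩B| / (|A|+|B|); 1.0 means perfect agreement."""
    intersection = sum(p and r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    return 2 * intersection / total if total else 1.0

model_mask  = [0, 1, 1, 1, 0, 0]
expert_mask = [0, 1, 1, 0, 0, 0]
print(dice_overlap(model_mask, expert_mask))  # 0.8
```

Scoring each image this way and stratifying by QS is one plausible route to the pattern reported above, where above-average-QS images show better agreement.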
Affiliation(s)
- Caroline A Taksoee-Vester
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark.
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark.
- Kamil Mikolaj
- DTU Compute, Technical University of Denmark (DTU), Lyngby, Denmark
- Zahra Bashir
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Center for Fetal Medicine, Department of Obstetrics, Slagelse Hospital, Slagelse, Denmark
- Olav B Petersen
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Karin Sundberg
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Aasa Feragen
- DTU Compute, Technical University of Denmark (DTU), Lyngby, Denmark
- Morten B S Svendsen
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Mads Nielsen
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Martin G Tolsgaard
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
15
Sivera R, Clark AE, Dall'Asta A, Ghi T, Schievano S, Lees CC. Fetal face shape analysis from prenatal 3D ultrasound images. Sci Rep 2024; 14:4411. [PMID: 38388522 PMCID: PMC10884000 DOI: 10.1038/s41598-023-50386-9]
Abstract
3D ultrasound imaging of fetal faces has been predominantly confined to qualitative assessment. Many genetic conditions evade prenatal diagnosis, and their identification could assist with parental counselling, pregnancy management and neonatal care planning. We describe a methodology to build a shape model of the third-trimester fetal face from 3D ultrasound and show how it can objectively describe morphological features and gestational-age-related changes of normal fetal faces. 135 fetal face 3D ultrasound volumes (117 appropriately grown, 18 growth-restricted) of 24-34 weeks' gestation were included. A 3D surface model of each face was obtained using a semi-automatic segmentation workflow. Size normalisation and rescaling were performed using a growth model giving the average size at every gestation. The model demonstrated a growth rate similar to standard head circumference reference charts. A landmark-free morphometry model was estimated to characterize shape differences using non-linear deformations of an idealized template face. Advancing gestation is associated with widening/fullness of the cheeks, contraction of the chin and deepening of the eyes. Fetal growth restriction is associated with a smaller average facial size but no morphological differences. This model may eventually be used as a reference to assist in the prenatal diagnosis of congenital anomalies with characteristic facial dysmorphisms.
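The size-normalisation step described above rescales each face so that later shape comparisons are independent of overall growth. The paper uses a gestational-age growth model for the scale factor; as a generic stand-in, the sketch below centres a 3-D point cloud and scales it to unit centroid size, a standard shape-analysis convention assumed here for illustration only:

```python
import math

def centroid_size_normalise(points):
    """Centre a 3-D point cloud on its centroid and scale to unit
    centroid size (root sum of squared distances to the centroid).
    Stand-in for the paper's growth-model rescaling (assumed)."""
    n = len(points)
    cx = sum(x for x, _, _ in points) / n
    cy = sum(y for _, y, _ in points) / n
    cz = sum(z for _, _, z in points) / n
    centred = [(x - cx, y - cy, z - cz) for x, y, z in points]
    size = math.sqrt(sum(x * x + y * y + z * z for x, y, z in centred))
    return [(x / size, y / size, z / size) for x, y, z in centred]

# Toy "face" of four surface vertices; real meshes have thousands
face = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 2.0)]
normalised = centroid_size_normalise(face)
```

After normalisation every face has its centroid at the origin and centroid size 1, so residual differences between faces reflect shape rather than size, which is what lets the morphometry model isolate effects such as growth restriction.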
Affiliation(s)
- Raphael Sivera
- Institute of Cardiovascular Science, University College London, London, UK
- Anna E Clark
- Institute of Reproductive and Developmental Biology, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
- Andrea Dall'Asta
- Department of Medicine and Surgery, University of Parma, Parma, Italy
- Tullio Ghi
- Department of Medicine and Surgery, University of Parma, Parma, Italy
- Silvia Schievano
- Institute of Cardiovascular Science, University College London, London, UK
- Christoph C Lees
- Institute of Reproductive and Developmental Biology, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
16
Lonsdale H, Gray GM, Ahumada LM, Matava CT. Machine Vision and Image Analysis in Anesthesia: Narrative Review and Future Prospects. Anesth Analg 2023; 137:830-840. [PMID: 37712476 PMCID: PMC11495405 DOI: 10.1213/ane.0000000000006679]
Abstract
Machine vision describes the use of artificial intelligence to interpret, analyze, and derive predictions from image or video data. Machine vision-based techniques are already in clinical use in radiology, ophthalmology, and dermatology, where some applications currently equal or exceed the performance of specialty physicians in areas of image interpretation. While machine vision in anesthesia has many potential applications, its development remains in its infancy in our specialty. Early research for machine vision in anesthesia has focused on automated recognition of anatomical structures during ultrasound-guided regional anesthesia or line insertion; recognition of the glottic opening and vocal cords during video laryngoscopy; prediction of the difficult airway using facial images; and clinical alerts for endobronchial intubation detected on chest radiograph. Current machine vision applications measuring the distance between endotracheal tube tip and carina have demonstrated noninferior performance compared to board-certified physicians. The performance and potential uses of machine vision for anesthesia will only grow with the advancement of underlying machine vision algorithm technical performance developed outside of medicine, such as convolutional neural networks and transfer learning. This article summarizes recently published works of interest, provides a brief overview of techniques used to create machine vision applications, explains frequently used terms, and discusses challenges the specialty will encounter as we embrace the advantages that this technology may bring to future clinical practice and patient care. As machine vision emerges onto the clinical stage, it is critically important that anesthesiologists are prepared to confidently assess which of these devices are safe, appropriate, and bring added value to patient care.
Affiliation(s)
- Hannah Lonsdale
- Department of Anesthesiology, Division of Pediatric Anesthesiology, Vanderbilt University Medical Center, Nashville, TN, USA
- Geoffrey M. Gray
- Center for Pediatric Data Science and Analytics Methodology, Johns Hopkins All Children’s Hospital, St. Petersburg, Florida, USA
- Luis M. Ahumada
- Center for Pediatric Data Science and Analytics Methodology, Johns Hopkins All Children’s Hospital, St. Petersburg, Florida, USA
- Clyde T. Matava
- Department of Anesthesia and Pain Medicine, The Hospital for Sick Children, Toronto, ON, Canada
- Department of Anesthesiology and Pain Medicine, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
17
Werner H, Santos IF, Giraldi GA, Lopes J, Ribeiro G, Lopes FP. Fetal magnetic resonance imaging artifacts: role of deep learning to improve imaging. Ultrasound Obstet Gynecol 2023; 62:302-303. [PMID: 36840982 DOI: 10.1002/uog.26185]
Affiliation(s)
- H Werner
- Instituto de Ensino e Pesquisa, Dasa (IEPD), Brazil
- BiodesignLab Dasa/PUC-Rio, Rio de Janeiro, Brazil
- I Félix Santos
- Laboratório Nacional de Computação Científica, Petrópolis, Rio de Janeiro, Brazil
- G A Giraldi
- Laboratório Nacional de Computação Científica, Petrópolis, Rio de Janeiro, Brazil
- J Lopes
- BiodesignLab Dasa/PUC-Rio, Rio de Janeiro, Brazil
- G Ribeiro
- BiodesignLab Dasa/PUC-Rio, Rio de Janeiro, Brazil
- F P Lopes
- Instituto de Ensino e Pesquisa, Dasa (IEPD), Brazil
- BiodesignLab Dasa/PUC-Rio, Rio de Janeiro, Brazil