1. Zhang J, Dawkins A. Artificial Intelligence in Ultrasound Imaging: Where Are We Now? Ultrasound Q 2024; 40:93-97. [PMID: 38842384] [DOI: 10.1097/ruq.0000000000000680]
Affiliation(s)
- Jie Zhang
- Department of Radiology, University of Kentucky, Lexington, KY
2. Ou Z, Bai J, Chen Z, Lu Y, Wang H, Long S, Chen G. RTSeg-net: A lightweight network for real-time segmentation of fetal head and pubic symphysis from intrapartum ultrasound images. Comput Biol Med 2024; 175:108501. [PMID: 38703545] [DOI: 10.1016/j.compbiomed.2024.108501]
Abstract
The segmentation of the fetal head (FH) and pubic symphysis (PS) from intrapartum ultrasound images plays a pivotal role in monitoring labor progression and informing crucial clinical decisions. Achieving real-time segmentation with high accuracy on systems with limited hardware capabilities presents significant challenges. To address these challenges, we propose the real-time segmentation network (RTSeg-Net), a lightweight deep learning model that incorporates distribution-shifting convolutional blocks, tokenized multilayer perceptron blocks, and efficient feature fusion blocks. Designed for computational efficiency, RTSeg-Net minimizes resource demand while significantly enhancing segmentation performance. Our evaluation on two distinct intrapartum ultrasound image datasets shows that RTSeg-Net achieves segmentation accuracy on par with more complex state-of-the-art networks while using merely 1.86 M parameters (about 6% of their parameter count) and operating seven times faster, reaching 31.13 frames per second on a Jetson Nano, a device with limited computing capacity. These results underscore RTSeg-Net's potential to provide accurate, real-time segmentation on low-power devices, broadening the scope of its application across various stages of labor. By facilitating real-time, accurate ultrasound image analysis on portable, low-cost devices, RTSeg-Net promises to improve intrapartum monitoring, making sophisticated diagnostic tools accessible to a wider range of healthcare settings.
Affiliation(s)
- Zhanhong Ou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China; Auckland Bioengineering Institute, University of Auckland, Auckland, 1010, New Zealand
- Zhide Chen
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Huijin Wang
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Shun Long
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Gaowen Chen
- Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
3. Bai J, Lu Y, Liu H, He F, Guo X. Editorial: New technologies improve maternal and newborn safety. Front Med Technol 2024; 6:1372358. [PMID: 38872737] [PMCID: PMC11169838] [DOI: 10.3389/fmedt.2024.1372358]
Affiliation(s)
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, China
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, China
- Huishu Liu
- Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou, China
- Fang He
- Department of Obstetrics and Gynecology, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Xiaohui Guo
- Department of Obstetrics, Shenzhen People’s Hospital, Shenzhen, China
4. Chen G, Bai J, Ou Z, Lu Y, Wang H. PSFHS: Intrapartum ultrasound image dataset for AI-based segmentation of pubic symphysis and fetal head. Sci Data 2024; 11:436. [PMID: 38698003] [PMCID: PMC11066050] [DOI: 10.1038/s41597-024-03266-4]
Abstract
During labor, the intrapartum transperineal ultrasound examination serves as a valuable tool, allowing direct observation of the relative positional relationship between the pubic symphysis and fetal head (PSFH). Accurate assessment of fetal head descent and prediction of the most suitable mode of delivery rely heavily on this relationship. However, an objective and quantitative interpretation of the ultrasound images requires precise PSFH segmentation (PSFHS), a task that is both time-consuming and demanding. As artificial intelligence (AI) is integrated into medical ultrasound image segmentation, the development and evaluation of AI-based models depend significantly on access to comprehensive and meticulously annotated datasets. Unfortunately, publicly accessible datasets tailored for PSFHS are notably scarce. Bridging this critical gap, we introduce a PSFHS dataset comprising 1358 images, meticulously annotated at the pixel level. The annotation process adhered to standardized protocols and involved collaboration among medical experts. This dataset stands as the most expansive and comprehensive resource for PSFHS to date.
Affiliation(s)
- Gaowen Chen
- Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Jieyun Bai
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
- Auckland Bioengineering Institute, the University of Auckland, Auckland, New Zealand
- Zhanhong Ou
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
- Yaosheng Lu
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
- Huijin Wang
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
5. Calhoun BC, Uselman H, Olle EW. Development of Artificial Intelligence Image Classification Models for Determination of Umbilical Cord Vascular Anomalies. J Ultrasound Med 2024; 43:881-897. [PMID: 38279605] [DOI: 10.1002/jum.16418]
Abstract
OBJECTIVE The goal of this work was to develop robust techniques for the processing and identification of single umbilical artery (SUA) using artificial intelligence (AI) image classification models. METHODS Retrospectively obtained ultrasound images were processed for blinding and text removal, then used for AI training and image prediction. After the text removal methods were developed and tested, a small study (40 images) used fastai/PyTorch to classify umbilical cord images. This data set was expanded to 286 lateral-CFI images, which were used to compare neural network performance, diagnostic value, and model predictions. RESULTS The AI-optical character recognition method was superior in its ability to remove text from images. The small mixed single umbilical artery determination data set was tested with a pretrained ResNet34 neural network and obtained an average error rate of 0.083 (n = 3). The expanded data set was then tested with several AI models. The majority of the tested networks obtained an average error rate of <0.15 with minimal modifications. The ResNet34-default performed best, with an image-classification error rate of 0.0175, a sensitivity of 1.00, a specificity of 0.97, and the ability to correctly infer classification. CONCLUSION This work provides a robust framework for AI classification of ultrasound images. AI successfully classified umbilical cord types with excellent diagnostic value. Together, this study provides a reproducible framework for developing AI-specific ultrasound classification of umbilical cord or other diagnoses, to be used in conjunction with physicians for optimal patient care.
Affiliation(s)
- Byron C Calhoun
- Department of Obstetrics and Gynecology, WVU School of Medicine, Charleston Division, Charleston, West Virginia, USA
- Maternal-Fetal Medicine, WVU School of Medicine, Charleston Division, Charleston, West Virginia, USA
- Heather Uselman
- Resident, Department of Obstetrics and Gynecology, Charleston Area Medical Center, Charleston, West Virginia, USA
- Eric W Olle
- Research and Development, SynXBio Inc., Charleston, West Virginia, USA
6. Taksoee-Vester CA, Mikolaj K, Bashir Z, Christensen AN, Petersen OB, Sundberg K, Feragen A, Svendsen MBS, Nielsen M, Tolsgaard MG. AI supported fetal echocardiography with quality assessment. Sci Rep 2024; 14:5809. [PMID: 38461322] [PMCID: PMC10925034] [DOI: 10.1038/s41598-024-56476-6]
Abstract
This study aimed to develop a deep learning model to assess the quality of fetal echocardiography and to perform prospective clinical validation. The model was trained on data from the 18-22-week anomaly scan conducted in seven hospitals from 2008 to 2018. Prospective validation involved 100 patients from two hospitals. A total of 5363 images from 2551 pregnancies were used for training and validation. The model's segmentation accuracy depended on image quality as measured by a quality score (QS). It achieved an overall average accuracy of 0.91 (SD 0.09) across the test set, with images of above-average QS scoring 0.97 (SD 0.03). During prospective validation of 192 images, clinicians rated 44.8% (SD 9.8) of images as equal in quality, 18.69% (SD 5.7) favoring auto-captured images, and 36.51% (SD 9.0) preferring manually captured ones. Images with above-average QS showed better agreement on segmentations (p < 0.001) and QS (p < 0.001) with fetal medicine experts. Auto-capture saved additional planes beyond protocol requirements, resulting in more comprehensive echocardiographies. Low QS had an adverse effect on both model performance and clinicians' agreement with model feedback. The findings highlight the importance of developing and evaluating AI models on 'noisy' real-life data rather than pursuing the highest accuracy possible with retrospective academic-grade data.
Affiliation(s)
- Caroline A Taksoee-Vester
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Kamil Mikolaj
- DTU Compute, Technical University of Denmark (DTU), Lyngby, Denmark
- Zahra Bashir
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Center for Fetal Medicine, Department of Obstetrics, Slagelse Hospital, Slagelse, Denmark
- Olav B Petersen
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Karin Sundberg
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Aasa Feragen
- DTU Compute, Technical University of Denmark (DTU), Lyngby, Denmark
- Morten B S Svendsen
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Mads Nielsen
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Martin G Tolsgaard
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
7. Sivera R, Clark AE, Dall'Asta A, Ghi T, Schievano S, Lees CC. Fetal face shape analysis from prenatal 3D ultrasound images. Sci Rep 2024; 14:4411. [PMID: 38388522] [PMCID: PMC10884000] [DOI: 10.1038/s41598-023-50386-9]
Abstract
3D ultrasound imaging of fetal faces has been predominantly confined to qualitative assessment. Many genetic conditions evade prenatal diagnosis, and identification could assist with parental counselling, pregnancy management and neonatal care planning. We describe a methodology to build a shape model of the third-trimester fetal face from 3D ultrasound and show how it can objectively describe morphological features and gestational-age-related changes of normal fetal faces. 135 fetal face 3D ultrasound volumes (117 appropriately grown, 18 growth-restricted) at 24-34 weeks' gestation were included. A 3D surface model of each face was obtained using a semi-automatic segmentation workflow. Size normalisation and rescaling were performed using a growth model giving the average size at every gestation. The model demonstrated a similar growth rate to standard head circumference reference charts. A landmark-free morphometry model was estimated to characterize shape differences using non-linear deformations of an idealized template face. Advancing gestation is associated with widening/fullness of the cheeks, contraction of the chin and deepening of the eyes. Fetal growth restriction is associated with a smaller average facial size but no morphological differences. This model may eventually be used as a reference to assist in the prenatal diagnosis of congenital anomalies with characteristic facial dysmorphisms.
Affiliation(s)
- Raphael Sivera
- Institute of Cardiovascular Science, University College London, London, UK
- Anna E Clark
- Institute of Reproductive and Developmental Biology, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
- Andrea Dall'Asta
- Department of Medicine and Surgery, University of Parma, Parma, Italy
- Tullio Ghi
- Department of Medicine and Surgery, University of Parma, Parma, Italy
- Silvia Schievano
- Institute of Cardiovascular Science, University College London, London, UK
- Christoph C Lees
- Institute of Reproductive and Developmental Biology, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
8. Lonsdale H, Gray GM, Ahumada LM, Matava CT. Machine Vision and Image Analysis in Anesthesia: Narrative Review and Future Prospects. Anesth Analg 2023; 137:830-840. [PMID: 37712476] [DOI: 10.1213/ane.0000000000006679]
Abstract
Machine vision describes the use of artificial intelligence to interpret, analyze, and derive predictions from image or video data. Machine vision-based techniques are already in clinical use in radiology, ophthalmology, and dermatology, where some applications currently equal or exceed the performance of specialty physicians in areas of image interpretation. While machine vision in anesthesia has many potential applications, its development remains in its infancy in our specialty. Early research for machine vision in anesthesia has focused on automated recognition of anatomical structures during ultrasound-guided regional anesthesia or line insertion; recognition of the glottic opening and vocal cords during video laryngoscopy; prediction of the difficult airway using facial images; and clinical alerts for endobronchial intubation detected on chest radiograph. Current machine vision applications measuring the distance between endotracheal tube tip and carina have demonstrated noninferior performance compared to board-certified physicians. The performance and potential uses of machine vision for anesthesia will only grow with the advancement of underlying machine vision algorithm technical performance developed outside of medicine, such as convolutional neural networks and transfer learning. This article summarizes recently published works of interest, provides a brief overview of techniques used to create machine vision applications, explains frequently used terms, and discusses challenges the specialty will encounter as we embrace the advantages that this technology may bring to future clinical practice and patient care. As machine vision emerges onto the clinical stage, it is critically important that anesthesiologists are prepared to confidently assess which of these devices are safe, appropriate, and bring added value to patient care.
Affiliation(s)
- Hannah Lonsdale
- Division of Pediatric Anesthesiology, Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, Tennessee
- Geoffrey M Gray
- Center for Pediatric Data Science and Analytics Methodology, Johns Hopkins All Children's Hospital, St Petersburg, Florida
- Luis M Ahumada
- Center for Pediatric Data Science and Analytics Methodology, Johns Hopkins All Children's Hospital, St Petersburg, Florida
- Clyde T Matava
- Department of Anesthesia and Pain Medicine, The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Anesthesiology and Pain Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
9. Werner H, Santos IF, Giraldi GA, Lopes J, Ribeiro G, Lopes FP. Fetal magnetic resonance imaging artifacts: role of deep learning to improve imaging. Ultrasound Obstet Gynecol 2023; 62:302-303. [PMID: 36840982] [DOI: 10.1002/uog.26185]
Affiliation(s)
- H Werner
- Instituto de Ensino e Pesquisa, Dasa (IEPD), Brazil
- BiodesignLab Dasa/PUC-Rio, Rio de Janeiro, Brazil
- I Félix Santos
- Laboratório Nacional de Computação Científica, Petrópolis, Rio de Janeiro, Brazil
- G A Giraldi
- Laboratório Nacional de Computação Científica, Petrópolis, Rio de Janeiro, Brazil
- J Lopes
- BiodesignLab Dasa/PUC-Rio, Rio de Janeiro, Brazil
- G Ribeiro
- BiodesignLab Dasa/PUC-Rio, Rio de Janeiro, Brazil
- F P Lopes
- Instituto de Ensino e Pesquisa, Dasa (IEPD), Brazil
- BiodesignLab Dasa/PUC-Rio, Rio de Janeiro, Brazil