1. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. PMID: 37959298; PMCID: PMC10649694; DOI: 10.3390/jcm12216833.
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is considered cost-effective and easily accessible, but it is time-consuming and requires specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study provides an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. A systematic literature search was performed in the PubMed and Cochrane Library databases, and matching abstracts were screened using the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles available in full text were assigned to the OB/GYN subspecialties and their research topics. As a result, this review includes 189 articles published from 1994 to 2023, of which 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. In conclusion, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. While most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
2. Yang Y, Wu B, Wu H, Xu W, Lyu G, Liu P, He S. Classification of normal and abnormal fetal heart ultrasound images and identification of ventricular septal defects based on deep learning. J Perinat Med 2023; 51:1052-1058. PMID: 37161929; DOI: 10.1515/jpm-2023-0041.
Abstract
OBJECTIVES Congenital heart defects (CHDs) are the most common birth defects. Artificial intelligence (AI) has recently been used to assist in CHD diagnosis, but no comparison has been made among the various types of algorithms that can assist in prenatal diagnosis. METHODS Normal and abnormal fetal heart ultrasound images, including five standard views, were collected according to the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) practice guidelines. You Only Look Once version 5 (YOLOv5) models were trained and tested, and the best-performing model was selected after comparing YOLOv5 with other classic detection methods. RESULTS On the training set, YOLOv5n performed slightly better than the other variants. On the validation set, YOLOv5n attained the highest overall accuracy (90.67%). On the CHD test set, YOLOv5n, which needed only 0.007 s to recognize each image, had the highest overall accuracy (82.93%), and YOLOv5l achieved the best accuracy on the abnormal dataset (71.93%). On the VSD test set, YOLOv5l performed best, with a 92.79% overall accuracy and 92.59% accuracy on the abnormal dataset. The YOLOv5 models outperformed the Fast region-based convolutional neural network (RCNN) & ResNet50 model and the Fast RCNN & MobileNetv2 model on both the CHD test set (p<0.05) and the VSD test set (p<0.01). CONCLUSIONS YOLOv5 models can accurately distinguish normal and abnormal fetal heart ultrasound images, especially with respect to the identification of VSD, and have the potential to assist in prenatal ultrasound diagnosis.
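The image-level evaluation reported here can be reproduced in outline from detector output: a YOLOv5-style model emits per-image detections as (label, confidence) pairs, which must be collapsed to a single diagnosis before accuracy is computed. A minimal sketch, assuming a max-confidence rule and hypothetical label names (the paper does not publish code, so the collapsing rule is an assumption):

```python
def classify_image(detections, default="normal"):
    """Collapse YOLO-style detections [(label, confidence), ...] into one
    image-level label: keep the highest-confidence detection, or fall back
    to a default label when the detector finds nothing in the frame."""
    if not detections:
        return default
    return max(detections, key=lambda d: d[1])[0]

def overall_accuracy(y_true, y_pred):
    """Fraction of images whose predicted label matches the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical detections for three images: a VSD case, a normal scan,
# and a VSD case the detector missed entirely (empty detection list).
preds = [
    classify_image([("vsd", 0.81), ("normal", 0.12)]),
    classify_image([("normal", 0.93)]),
    classify_image([]),
]
truth = ["vsd", "normal", "vsd"]
print(overall_accuracy(truth, preds))  # 2 of 3 correct
```

The same accuracy function applied per subgroup (normal vs. abnormal images) yields the separate "overall" and "abnormal dataset" figures the abstract compares.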
Affiliation(s)
- Yiru Yang: The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
- Bingzheng Wu: College of Engineering, Huaqiao University, Quanzhou, Fujian, P.R. China
- Huiling Wu: The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
- Wu Xu: The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
- Guorong Lyu: The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China; Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, Fujian, P.R. China
- Peizhong Liu: College of Engineering, Huaqiao University, Quanzhou, Fujian, P.R. China
- Shaozheng He: The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
3. Ramirez Zegarra R, Ghi T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. Ultrasound Obstet Gynecol 2023; 62:185-194. PMID: 36436205; DOI: 10.1002/uog.26130.
Abstract
Deep learning is considered the leading artificial intelligence tool in image analysis. Deep-learning algorithms excel at image recognition, which makes them valuable in medical imaging. Obstetric ultrasound has become the gold-standard imaging modality for the detection and diagnosis of fetal malformations. However, ultrasound relies heavily on the operator's experience, making it unreliable in inexperienced hands. Several studies have proposed deep-learning models as a tool to support sonographers and overcome these limitations inherent to ultrasound. Deep learning has many clinical applications in fetal imaging, including identification of normal and abnormal fetal anatomy and measurement of fetal biometry. In this Review, we provide a comprehensive explanation of the fundamentals of deep learning in fetal imaging, with particular focus on its clinical applicability. © 2022 International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- R Ramirez Zegarra: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- T Ghi: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
4. Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. PMID: 36308861; DOI: 10.1016/j.media.2022.102629.
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing fetal ultrasound (US) images. A number of survey papers are now available, but most focus on the broader area of medical-image analysis or do not cover all DL applications in fetal US. This paper surveys the most recent work in the field, covering a total of 153 research papers published after 2017. Papers are analyzed and discussed from both the methodological and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical-structure analysis, and (iii) biometry-parameter estimation. For each category, the main limitations and open issues are presented, and summary tables are included to facilitate comparison among the different approaches. Emerging applications are also outlined, as are publicly available datasets and the performance metrics commonly used to assess algorithm performance. The paper ends with a critical summary of the current state of the art of DL algorithms for fetal US image analysis and a discussion of the challenges that researchers in the field must tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo: Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni: Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia: The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
5. Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via ultrasound images. iScience 2022; 25:104713. PMID: 35856024; PMCID: PMC9287600; DOI: 10.1016/j.isci.2022.104713.
Abstract
Several reviews have addressed artificial intelligence (AI) techniques for improving pregnancy outcomes, but none focuses on ultrasound images. This survey explores how AI can assist fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines and conducted a comprehensive search of eight bibliographic databases. Of 1269 studies, 107 were included. We found that 2D ultrasound images were more widely used (88 studies) than 3D and 4D ultrasound images (19). Classification was the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and miscellaneous methods such as object detection, regression, and reinforcement learning (18). The anatomical regions that gained the most traction were the fetal head (43), body (31), heart (13), abdomen (10), and face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Highlights
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized as general, head, heart, face, and abdomen
- The most used AI techniques are classification, segmentation, object detection, and RL
- Research and practical implications are included
6. Li JW, Cao YC, Zhao ZJ, Shi ZT, Duan XQ, Chang C, Chen JG. Prediction for pathological and immunohistochemical characteristics of triple-negative invasive breast carcinomas: the performance comparison between quantitative and qualitative sonographic feature analysis. Eur Radiol 2022; 32:1590-1600. PMID: 34519862; DOI: 10.1007/s00330-021-08224-x.
Abstract
OBJECTIVE Sonographic features are associated with the pathological and immunohistochemical characteristics of triple-negative breast cancer (TNBC). To predict the biological properties of TNBC, the performance of quantitative high-throughput sonographic feature analysis was compared with that of qualitative feature assessment. METHODS We retrospectively reviewed the ultrasound images and the clinical, pathological, and immunohistochemical (IHC) data of 252 female TNBC patients. All patients were subgrouped according to histological grade, Ki67 expression level, and human epidermal growth factor receptor 2 (HER2) score. Qualitative sonographic feature assessment covered shape, margin, posterior acoustic pattern, and calcification, referring to the Breast Imaging Reporting and Data System (BI-RADS). Quantitative sonographic features were acquired through computer-aided radiomics analysis: breast cancer masses were manually segmented from the surrounding breast tissue, and 1688 radiomics features of 7 feature classes were extracted from each ultrasound image. Principal component analysis (PCA), the least absolute shrinkage and selection operator (LASSO), and a support vector machine (SVM) were used to determine the high-throughput radiomics features most highly correlated with the biological properties. The performance of both quantitative and qualitative sonographic features in predicting the biological properties of TNBC was represented by the area under the receiver operating characteristic curve (AUC). RESULTS In the qualitative assessment, regular tumor shape, no angular or spiculated margin, posterior acoustic enhancement, and no calcification were the independent sonographic features of TNBC. Using the combination of these four features to predict histological grade, Ki67, HER2, axillary lymph node metastasis (ALNM), and lymphovascular invasion (LVI), the AUC was 0.673, 0.680, 0.651, 0.587, and 0.566, respectively. The number of high-throughput features closely correlated with the biological properties was 34 for histological grade (AUC 0.942), 27 for Ki67 (AUC 0.732), 25 for HER2 (AUC 0.730), 34 for ALNM (AUC 0.804), and 34 for LVI (AUC 0.795). CONCLUSION High-throughput quantitative sonographic features are superior to traditional qualitative ultrasound features in predicting the biological behavior of TNBC.
KEY POINTS
- Sonographic appearances of TNBC vary widely in accordance with its biological and clinical characteristics.
- Both qualitative and quantitative sonographic features of TNBC are associated with tumor biological characteristics.
- Quantitative high-throughput feature analysis is superior to two-dimensional sonographic feature assessment in predicting tumor biological properties.
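The AUC figures compared throughout this study are equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal, dependency-free sketch of that computation (the paper's full PCA + LASSO + SVM feature-selection pipeline is not reproduced here; input scores are hypothetical):

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation; tied scores
    share the average rank, matching the conventional ROC AUC value."""
    pairs = sorted(zip(scores, labels))
    ranks, i, n = [], 0, len(pairs)
    while i < n:
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1                      # ties occupy ranks i+1 .. j
        avg = (i + 1 + j) / 2           # average of the tied ranks
        ranks.extend((avg, pairs[k][1]) for k in range(i, j))
        i = j
    pos = sum(1 for _, y in ranks if y == 1)
    neg = len(ranks) - pos
    rank_sum = sum(r for r, y in ranks if y == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

# Hypothetical model scores for 2 positive and 2 negative cases.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why the qualitative features' values near 0.57-0.59 for ALNM and LVI indicate little predictive value.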
Affiliation(s)
- Jia-Wei Li: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No 270, Dong'an Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, No 270, Dong'an Road, Xuhui District, Shanghai, 200032, China
- Yu-Cheng Cao: Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, #500 Dongchuan Rd., Shanghai, 200241, China
- Zhi-Jin Zhao: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No 270, Dong'an Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, No 270, Dong'an Road, Xuhui District, Shanghai, 200032, China
- Zhao-Ting Shi: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No 270, Dong'an Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, No 270, Dong'an Road, Xuhui District, Shanghai, 200032, China
- Xiao-Qian Duan: Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, #500 Dongchuan Rd., Shanghai, 200241, China
- Cai Chang: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No 270, Dong'an Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, No 270, Dong'an Road, Xuhui District, Shanghai, 200032, China
- Jian-Gang Chen: Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, #500 Dongchuan Rd., Shanghai, 200241, China
7. Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. Mobile Networks and Applications 2021; 26:351-380. DOI: 10.1007/s11036-020-01672-7.