1. Weichert J, Scharf JL. Advancements in Artificial Intelligence for Fetal Neurosonography: A Comprehensive Review. J Clin Med 2024; 13:5626. [PMID: 39337113] [PMCID: PMC11432922] [DOI: 10.3390/jcm13185626]
Abstract
The detailed sonographic assessment of the fetal neuroanatomy plays a crucial role in prenatal diagnosis, providing valuable insights into timely, well-coordinated fetal brain development and detecting even subtle anomalies that may impact neurodevelopmental outcomes. With recent advancements in artificial intelligence (AI) in general and medical imaging in particular, there has been growing interest in leveraging AI techniques to enhance the accuracy, efficiency, and clinical utility of fetal neurosonography. The paramount objective of this review is to discuss the latest developments in AI applications in this field, focusing on image analysis, the automation of measurements, prediction models of neurodevelopmental outcomes, visualization techniques, and their integration into clinical routine.
Affiliation(s)
- Jan Weichert: Division of Prenatal Medicine, Department of Gynecology and Obstetrics, University Hospital of Schleswig-Holstein, Ratzeburger Allee 160, 23538 Luebeck, Germany; Elbe Center of Prenatal Medicine and Human Genetics, Willy-Brandt-Str. 1, 20457 Hamburg, Germany
- Jann Lennard Scharf: Division of Prenatal Medicine, Department of Gynecology and Obstetrics, University Hospital of Schleswig-Holstein, Ratzeburger Allee 160, 23538 Luebeck, Germany
2. Liu W, Zhang B, Liu T, Jiang J, Liu Y. Artificial Intelligence in Pancreatic Image Analysis: A Review. Sensors (Basel) 2024; 24:4749. [PMID: 39066145] [PMCID: PMC11280964] [DOI: 10.3390/s24144749]
Abstract
Pancreatic cancer is a highly lethal disease with a poor prognosis. Its early diagnosis and accurate treatment mainly rely on medical imaging, so accurate medical image analysis is especially vital for pancreatic cancer patients. However, medical image analysis of pancreatic cancer is facing challenges due to ambiguous symptoms, high misdiagnosis rates, and significant financial costs. Artificial intelligence (AI) offers a promising solution by relieving medical personnel's workload, improving clinical decision-making, and reducing patient costs. This study focuses on AI applications such as segmentation, classification, object detection, and prognosis prediction across five types of medical imaging: CT, MRI, EUS, PET, and pathological images, as well as integrating these imaging modalities to boost diagnostic accuracy and treatment efficiency. In addition, this study discusses current hot topics and future directions aimed at overcoming the challenges in AI-enabled automated pancreatic cancer diagnosis algorithms.
Affiliation(s)
- Weixuan Liu: Sydney Smart Technology College, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Bairui Zhang: Sydney Smart Technology College, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Tao Liu: School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Juntao Jiang: College of Control Science and Engineering, Zhejiang University, Hangzhou 310058, China
- Yong Liu: College of Control Science and Engineering, Zhejiang University, Hangzhou 310058, China
3. Alzubaidi M, Shah U, Agus M, Househ M. FetSAM: Advanced Segmentation Techniques for Fetal Head Biometrics in Ultrasound Imagery. IEEE Open J Eng Med Biol 2024; 5:281-295. [PMID: 38766538] [PMCID: PMC11100952] [DOI: 10.1109/ojemb.2024.3382487]
Abstract
Goal: FetSAM represents a cutting-edge deep learning model aimed at revolutionizing fetal head ultrasound segmentation, thereby elevating prenatal diagnostic precision. Methods: Utilizing a comprehensive dataset, the largest to date for fetal head metrics, FetSAM incorporates prompt-based learning. It distinguishes itself with a dual loss mechanism, combining Weighted DiceLoss and Weighted Lovasz Loss, optimized through AdamW and underscored by class weight adjustments for better segmentation balance. Performance benchmarks against prominent models such as U-Net, DeepLabV3, and Segformer highlight its efficacy. Results: FetSAM delivers unparalleled segmentation accuracy, demonstrated by a DSC of 0.90117, an HD of 1.86484, and an ASD of 0.46645. Conclusion: FetSAM sets a new benchmark in AI-enhanced prenatal ultrasound analysis, providing a robust, precise tool for clinical applications and pushing the envelope of prenatal care with its groundbreaking dataset and segmentation capabilities.
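The DSC reported above is the Dice similarity coefficient, a standard overlap measure between a predicted and a reference segmentation mask. As an illustrative sketch only (not the FetSAM implementation), it can be computed for binary masks as 2|A∩B| / (|A| + |B|); the toy masks below are hypothetical:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks for illustration, not fetal ultrasound data
a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
score = dice_coefficient(a, b)  # 2*3 / (4 + 3) ≈ 0.857
```

A DSC of 1.0 means perfect overlap; the paper's reported 0.90117 indicates very high agreement with the reference masks.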
Affiliation(s)
- Mahmood Alzubaidi: College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Uzair Shah: College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Marco Agus: College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Mowafa Househ: College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
4. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. It is considered cost effective and easily accessible but is time consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to overview recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with available full texts were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and the assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
5. Guo J, Tan G, Wu F, Wen H, Li K. Fetal Ultrasound Standard Plane Detection With Coarse-to-Fine Multi-Task Learning. IEEE J Biomed Health Inform 2023; 27:5023-5031. [PMID: 36173776] [DOI: 10.1109/jbhi.2022.3209589]
Abstract
The ultrasound standard plane plays an important role in prenatal fetal growth parameter measurement and disease diagnosis in prenatal screening. However, obtaining standard planes in a fetal ultrasound video is not only laborious and time-consuming but also depends on the clinical experience of sonographers to a certain extent. To improve the acquisition efficiency and accuracy of the ultrasound standard plane, we propose a novel detection framework that utilizes both the coarse-to-fine detection strategy and multi-task learning mechanism for feature-fused images. First, traditional manually-designed features and deep learning-based features are fused to obtain low-level shared features, which can enhance the model's feature expression ability. Inspired by the process of human recognition, ultrasound standard plane detection is divided into a coarse process of plane type classification and a fine process of standard-or-not detection, which is implemented via an end-to-end multi-task learning network. The region-of-interest area is also recognized in our detection framework to suppress the influence of a variable maternal background. Extensive experiments are conducted on three ultrasound planes of the first-class fetal examination, i.e., the femur, thalamus, and abdomen ultrasound images. The experimental results show that our method outperforms competing methods in terms of accuracy, which demonstrates the efficacy of the proposed method and can reduce the workload of sonographers in prenatal screening.
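The coarse-to-fine multi-task idea above, two prediction heads sharing one low-level feature representation, can be sketched minimally in NumPy. This is an illustrative toy only, not the authors' network: the random linear heads, feature dimension, and class counts are all assumptions for demonstration:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Shared low-level features, a stand-in for fused hand-crafted + deep features
features = rng.normal(size=(4, 16))      # batch of 4 images, 16-dim features

# Coarse task head: which plane type (femur / thalamus / abdomen)?
W_coarse = rng.normal(size=(16, 3))
plane_probs = softmax(features @ W_coarse)

# Fine task head: is this view a standard plane or not?
W_fine = rng.normal(size=(16, 2))
standard_probs = softmax(features @ W_fine)

plane_type = plane_probs.argmax(axis=1)          # coarse decision per image
is_standard = standard_probs.argmax(axis=1) == 0  # fine decision per image
```

In a trained multi-task network both heads would be optimized jointly, so the shared features must serve both the coarse and the fine objective.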
6. Wang L, Yang Y, Yang A, Li T. Lightweight deep learning model incorporating an attention mechanism and feature fusion for automatic classification of gastric lesions in gastroscopic images. Biomed Opt Express 2023; 14:4677-4695. [PMID: 37791283] [PMCID: PMC10545198] [DOI: 10.1364/boe.487456]
Abstract
Accurate diagnosis of various lesions in the formation stage of gastric cancer is an important problem for doctors. Automatic diagnosis tools based on deep learning can help doctors improve the accuracy of gastric lesion diagnosis. Most of the existing deep learning-based methods have been used to detect a limited number of lesions in the formation stage of gastric cancer, and the classification accuracy needs to be improved. To this end, this study proposed an attention mechanism feature fusion deep learning model with only 14 million (M) parameters. Based on that model, the automatic classification of a wide range of lesions covering the stage of gastric cancer formation was investigated, including non-neoplasm (including gastritis and intestinal metaplasia), low-grade intraepithelial neoplasia, and early gastric cancer (including high-grade intraepithelial neoplasia and early gastric cancer). A total of 4455 magnification endoscopy with narrow-band imaging (ME-NBI) images from 1188 patients were collected to train and test the proposed method. The results on the test dataset showed that, compared with the best-performing advanced gastric lesion classification method (overall accuracy = 94.3%, parameters = 23.9 M), the proposed method achieved both higher overall accuracy and a relatively lightweight model (overall accuracy = 95.6%, parameters = 14 M). The accuracy, sensitivity, and specificity for low-grade intraepithelial neoplasia were 94.5%, 93.0%, and 96.5%, respectively, achieving state-of-the-art classification performance. In conclusion, our method has demonstrated its potential in diagnosing various lesions at the stage of gastric cancer formation.
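The accuracy, sensitivity, and specificity figures quoted above all derive from the counts of a binary confusion matrix. A minimal sketch of those definitions follows; the counts used are hypothetical and are not the paper's data:

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate: lesions correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: non-lesions correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for one lesion class, chosen only to illustrate the formulas
acc, sens, spec = binary_metrics(tp=93, fp=7, fn=7, tn=193)
# acc ≈ 0.953, sens = 0.930, spec = 0.965
```

Reporting sensitivity and specificity alongside accuracy matters here because class imbalance (far more non-neoplastic images than neoplasia) can make accuracy alone misleading.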
Affiliation(s)
- Lingxiao Wang: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin 300192, China
- Yingyun Yang: Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Aiming Yang: Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Ting Li: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin 300192, China
7. Ramirez Zegarra R, Ghi T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. Ultrasound Obstet Gynecol 2023; 62:185-194. [PMID: 36436205] [DOI: 10.1002/uog.26130]
Abstract
Deep learning is considered the leading artificial intelligence tool in image analysis in general. Deep-learning algorithms excel at image recognition, which makes them valuable in medical imaging. Obstetric ultrasound has become the gold standard imaging modality for detection and diagnosis of fetal malformations. However, ultrasound relies heavily on the operator's experience, making it unreliable in inexperienced hands. Several studies have proposed the use of deep-learning models as a tool to support sonographers, in an attempt to overcome these problems inherent to ultrasound. Deep learning has many clinical applications in the field of fetal imaging, including identification of normal and abnormal fetal anatomy and measurement of fetal biometry. In this Review, we provide a comprehensive explanation of the fundamentals of deep learning in fetal imaging, with particular focus on its clinical applicability. © 2022 International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- R Ramirez Zegarra: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- T Ghi: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
8. Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802] [DOI: 10.1002/pd.6411]
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the following keywords: "neural networks", OR "artificial intelligence", OR "machine learning", OR "deep learning", AND "obstetrics", OR "obstetrical", OR "fetus", OR "foetus", OR "fetal", OR "foetal", OR "pregnancy", OR "pregnant", AND "ultrasound", from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound. Obstetric ultrasound was defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. The search identified a total of 127 papers that fulfilled the inclusion criteria. The current uses of AI in obstetric ultrasound include first trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses including assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Lea Nehme: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Alfred Abuhamad: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
9. Sarno L, Neola D, Carbone L, Saccone G, Carlea A, Miceli M, Iorio GG, Mappa I, Rizzo G, Di Girolamo R, D'Antonio F, Guida M, Maruotti GM. Use of artificial intelligence in obstetrics: not quite ready for prime time. Am J Obstet Gynecol MFM 2023; 5:100792. [PMID: 36356939] [DOI: 10.1016/j.ajogmf.2022.100792]
Abstract
Artificial intelligence is finding several applications in healthcare settings. This study aimed to report evidence on the effectiveness of artificial intelligence applications in obstetrics. Through a narrative review of the literature, we described artificial intelligence use in different obstetrical areas as follows: prenatal diagnosis, fetal heart monitoring, prediction and management of pregnancy-related complications (preeclampsia, preterm birth, gestational diabetes mellitus, and placenta accreta spectrum), and labor. Artificial intelligence seems to be a promising tool to help clinicians in daily clinical activity. The main advantages that emerged from this review are the reduction of inter- and intraoperator variability, reduced procedure times, and improvement of overall diagnostic performance. However, the diffusion of these systems into routine clinical practice currently raises several issues. Reported evidence is still very limited, and further studies are needed to confirm the clinical applicability of artificial intelligence. Moreover, better training should be ensured for the clinicians who will use these systems, and evidence-based guidelines on this topic should be produced to enhance the strengths of artificial systems and minimize their limits.
Affiliation(s)
- Laura Sarno, Daniele Neola, Luigi Carbone, Gabriele Saccone, Annunziata Carlea, Giuseppe Gabriele Iorio, Raffaella Di Girolamo, Maurizio Guida: Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy
- Marco Miceli: Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy; CEINGE Biotecnologie Avanzate, Naples, Italy
- Ilenia Mappa, Giuseppe Rizzo: Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy
- Francesco D'Antonio: Center for Fetal Care and High Risk Pregnancy, Department of Obstetrics and Gynecology, University G. D'Annunzio of Chieti-Pescara, Chieti, Italy
- Giuseppe Maria Maruotti: Gynecology and Obstetrics Unit, Department of Public Health, School of Medicine, University of Naples Federico II, Naples, Italy
10. Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861] [DOI: 10.1016/j.media.2022.102629]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing ultrasound (US) fetal images. A number of survey papers are now available in the field, but most of them focus on a broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and discussed from both the methodology and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical structure analysis, and (iii) biometry parameter estimation. For each category, main limitations and open issues are presented. Summary tables are included to facilitate the comparison among the different approaches. In addition, emerging applications are also outlined. Publicly available datasets and performance metrics commonly used to assess algorithm performance are summarized, too. This paper ends with a critical summary of the current state of the art on DL algorithms for fetal US image analysis and a discussion of the current challenges that researchers working in the field have to tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo: Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni: Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia: The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
11. Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348] [DOI: 10.1002/mp.15936]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting the deep learning technique, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wen Li: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Peng Cheng Laboratory, Shenzhen 518066, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
12. Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024] [PMCID: PMC9287600] [DOI: 10.1016/j.isci.2022.104713]
Abstract
Several reviews have been conducted regarding artificial intelligence (AI) techniques to improve pregnancy outcomes, but they do not focus on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Out of 1269 studies, 107 are included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetus head (43), fetus body (31), fetus heart (13), fetus abdomen (10), and fetus face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Highlights:
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized into general, head, heart, face, and abdomen
- The most used AI techniques are classification, segmentation, object detection, and reinforcement learning
- The research and practical implications are included
13. Deep Learning Approaches for Automatic Localization in Medical Images. Comput Intell Neurosci 2022; 2022:6347307. [PMID: 35814554] [PMCID: PMC9259335] [DOI: 10.1155/2022/6347307]
Abstract
Recent revolutionary advances in deep learning (DL) have fueled several breakthrough achievements in various complicated computer vision tasks. The remarkable successes started in 2012, when deep neural networks (DNNs) outperformed shallow machine learning models on a number of significant benchmarks, and significant advances followed in computer vision through very complex image interpretation tasks performed with outstanding accuracy. These achievements have shown great promise in a wide variety of fields, especially in medical image analysis, by creating opportunities to diagnose and treat diseases earlier. In recent years, the application of DNNs to object localization has gained the attention of researchers due to its success over conventional methods. As this has become a very broad and rapidly growing field, this study presents a short review of DNN implementations for medical images and validates their efficacy on benchmarks. This is the first review that focuses on object localization using DNNs in medical images. The key aim of this study was to summarize the recent studies based on DNNs for medical image localization and to highlight the research gaps that can provide worthwhile ideas to shape future research on object localization tasks. It starts with an overview of the importance of medical image analysis and the existing technology in this space. The discussion then proceeds to the dominant DNNs utilized in the current literature. Finally, we conclude by discussing the challenges associated with applying DNNs to medical image localization, which can drive further studies identifying potential future developments in the relevant field of study.
14
Atehortúa A, Romero E, Garreau M. Characterization of motion patterns by a spatio-temporal saliency descriptor in cardiac cine MRI. Comput Methods Programs Biomed 2022; 218:106714. [PMID: 35263659] [DOI: 10.1016/j.cmpb.2022.106714]
Abstract
BACKGROUND AND OBJECTIVE Abnormalities of heart motion reveal the presence of disease. However, quantitative interpretation of the motion is still a challenge due to the complex dynamics of the heart. This work proposes a quantitative characterization of regional cardiac motion patterns in cine magnetic resonance imaging (MRI) by a novel spatio-temporal saliency descriptor. METHOD The strategy starts by dividing the cardiac sequence into a progression of scales which are in turn mapped to a feature space of regional orientation changes, mimicking the multi-resolution decomposition of oriented primitive changes in visual systems. These changes are estimated as the difference between a particular time and the rest of the sequence. This decomposition is then temporally and regionally integrated for a particular orientation, and then for the set of different orientations. A final spatio-temporal 4D saliency map is obtained as the summation of the previously integrated information over the available scales. The saliency dispersion of this map was computed at standard cardiac locations as a measure of the regional motion pattern and was applied to discriminate control and hypertrophic cardiomyopathy (HCM) subjects during the diastolic phase. RESULTS Salient motion patterns were estimated from an experimental set of 3D sequences acquired by MRI from 108 subjects (33 control, 35 HCM, 20 dilated cardiomyopathy (DCM), and 20 myocardial infarction (MINF) from heterogeneous datasets). HCM and control subjects were classified by an SVM that learned the salient motion patterns estimated with the presented strategy, achieving a 94% AUC. In addition, statistically significant differences (Student's t-test, p<0.05) were found among disease groups in the septal and anterior ventricular segments at both ED and ES, with salient motion characteristics aligned with existing knowledge of the diseases.
CONCLUSIONS Regional wall motion abnormality in the apical, anterior, basal, and inferior segments was associated with saliency dispersion in HCM, DCM, and MINF compared to healthy controls during the systolic and diastolic phases. This saliency analysis may be used to detect subtle changes in heart function.
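The 94% AUC reported above is the area under the ROC curve, which equals the probability that a randomly chosen HCM subject receives a higher classifier score than a randomly chosen control. A minimal, library-free sketch of that statistic (the scores below are made-up illustrations, not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive scores higher
    than a randomly chosen negative (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy saliency-dispersion scores (hypothetical values)
hcm = [0.9, 0.8, 0.75, 0.6]
control = [0.4, 0.3, 0.55, 0.2]
print(auc(hcm, control))  # 1.0: the two groups separate perfectly
```

An uninformative classifier scores 0.5 with this definition, which is why AUC is a convenient scale for comparing discriminative power across studies.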
Affiliation(s)
- Angélica Atehortúa
- Universidad Nacional de Colombia, Bogotá, Colombia; Univ Rennes, Inserm, LTSI UMR 1099, Rennes F-35000, France
15
Torres HR, Morais P, Oliveira B, Birdir C, Rüdiger M, Fonseca JC, Vilaça JL. A review of image processing methods for fetal head and brain analysis in ultrasound images. Comput Methods Programs Biomed 2022; 215:106629. [PMID: 35065326] [DOI: 10.1016/j.cmpb.2022.106629]
Abstract
BACKGROUND AND OBJECTIVE Examination of the head shape and brain during the fetal period is paramount to evaluate head growth, predict neurodevelopment, and diagnose fetal abnormalities. Prenatal ultrasound is the imaging modality most used for this evaluation. However, manual interpretation of these images is challenging, and image processing methods to aid the task have therefore been proposed in the literature. This article presents a review of these state-of-the-art methods. METHODS This work analyzes and categorizes the different image processing methods used to evaluate the fetal head and brain in ultrasound imaging. A total of 109 articles published since 2010 were analyzed. Different applications are covered in this review, namely analysis of the head shape and inner brain structures, identification of standard clinical planes, fetal development analysis, and methods for image processing enhancement. RESULTS For each application, the reviewed techniques are categorized according to their theoretical approach, and the image processing methods best suited to accurately analyze the head and brain are identified. Furthermore, future research needs are discussed. Finally, topics whose research is lacking in the literature are outlined, along with new fields of application. CONCLUSIONS A multitude of image processing methods has been proposed for fetal head and brain analysis. In summary, techniques from different categories have shown their potential to improve clinical practice. Nevertheless, further research must be conducted to strengthen the current methods, especially for 3D imaging analysis and acquisition and for abnormality detection.
Affiliation(s)
- Helena R Torres
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Pedro Morais
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Bruno Oliveira
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Cahit Birdir
- Department of Gynecology and Obstetrics, University Hospital Carl Gustav Carus, TU Dresden, Germany; Saxony Center for Feto-Neonatal Health, TU Dresden, Germany
- Mario Rüdiger
- Department for Neonatology and Pediatric Intensive Care, University Hospital Carl Gustav Carus, TU Dresden, Germany
- Jaime C Fonseca
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
16
Weichert J, Welp A, Scharf JL, Dracopoulos C, Becker WH, Gembicki M. The Use of Artificial Intelligence in Automation in the Fields of Gynaecology and Obstetrics - an Assessment of the State of Play. Geburtshilfe Frauenheilkd 2021; 81:1203-1216. [PMID: 34754270] [PMCID: PMC8568505] [DOI: 10.1055/a-1522-3029]
Abstract
The long-awaited progress in digitalisation is generating huge amounts of medical data every day, and manual analysis and targeted, patient-oriented evaluation of this data is becoming increasingly difficult or even infeasible. This state of affairs and the associated, increasingly complex requirements for individualised precision medicine underline the need for modern software solutions and algorithms across the entire healthcare system. The utilisation of state-of-the-art equipment and techniques in almost all areas of medicine over the past few years has now indeed enabled automation processes to enter - at least in part - into routine clinical practice. Such systems utilise a wide variety of artificial intelligence (AI) techniques, the majority of which have been developed to optimise medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection and classification and, as an emerging field of research, radiogenomics. Tasks handled by AI are completed significantly faster and more precisely, as clearly demonstrated by the annual results of the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC), in which error rates have been below those of humans since 2015. This review article discusses the potential capabilities and currently available applications of AI in gynaecological-obstetric diagnostics, focusing in particular on automated techniques in prenatal sonographic diagnostics.
Affiliation(s)
- Jan Weichert
- Klinik für Frauenheilkunde und Geburtshilfe, Bereich Pränatalmedizin und Spezielle Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Zentrum für Pränatalmedizin an der Elbe, Hamburg, Germany
- Amrei Welp
- Klinik für Frauenheilkunde und Geburtshilfe, Bereich Pränatalmedizin und Spezielle Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Jann Lennard Scharf
- Klinik für Frauenheilkunde und Geburtshilfe, Bereich Pränatalmedizin und Spezielle Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Dracopoulos
- Klinik für Frauenheilkunde und Geburtshilfe, Bereich Pränatalmedizin und Spezielle Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Michael Gembicki
- Klinik für Frauenheilkunde und Geburtshilfe, Bereich Pränatalmedizin und Spezielle Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
17
Jackson P, Korte J, McIntosh L, Kron T, Ellul J, Li J, Hardcastle N. CT slice alignment to whole-body reference geometry by convolutional neural network. Phys Eng Sci Med 2021; 44:1213-1219. [PMID: 34505991] [DOI: 10.1007/s13246-021-01056-5]
Abstract
Volumetric medical imaging lacks a standardised coordinate geometry linking the image frame-of-reference to specific anatomical regions. This results in an inability to locate anatomy in medical images without visual assessment and precludes a variety of image analysis tasks that could benefit from a standardised, machine-readable coordinate system. In this work, a proposed geometric system that scales with patient size is described and applied to a variety of cases in computed tomography imaging. Subsequently, a convolutional neural network is trained to associate axial CT slice appearance with the standardised coordinate value along the patient's superior-inferior axis. The trained network predicted the per-slice reference location with an accuracy of ±12 mm and was relatively stable across all annotated regions, ranging from brain to thighs. A version of the trained model, along with scripts to perform network training in other applications, is made available. Finally, a selection of potential applications is illustrated, including organ localisation, image registration initialisation, and scan length determination for auditing diagnostic reference levels.
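The core of such a patient-scaled geometry can be sketched as a linear rescaling between two anatomical landmarks, so that the same coordinate value always refers to the same relative body position regardless of patient size. The landmark choice below (skull vertex to ischium) is an illustrative assumption, not the paper's exact definition:

```python
def standardized_si_coordinate(z_mm, z_vertex_mm, z_ischium_mm):
    """Map an axial slice's table position (mm) to a patient-scaled
    superior-inferior coordinate: 0 at the skull vertex, 1 at the
    ischium, linear in between. Landmarks here are illustrative,
    not the reference points used in the paper."""
    return (z_mm - z_vertex_mm) / (z_ischium_mm - z_vertex_mm)

# A slice halfway between the two landmarks maps to 0.5
print(standardized_si_coordinate(450.0, 0.0, 900.0))  # 0.5
```

A network regressing this normalized value from slice appearance can then be compared across patients directly, which is what makes the reported ±12 mm figure meaningful from brain to thighs.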
Affiliation(s)
- Price Jackson
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, 3000, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, 3000, Australia
- James Korte
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, 3000, Australia
- Lachlan McIntosh
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, 3000, Australia
- Tomas Kron
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, 3000, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, 3000, Australia
- Jason Ellul
- Department of Research Computing, Peter MacCallum Cancer Centre, Melbourne, 3000, Australia
- Jason Li
- Department of Biostatistics, Peter MacCallum Cancer Centre, Melbourne, 3000, Australia
- Nicholas Hardcastle
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, 3000, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, 3000, Australia
18
Yang X, Dou H, Huang R, Xue W, Huang Y, Qian J, Zhang Y, Luo H, Guo H, Wang T, Xiong Y, Ni D. Agent With Warm Start and Adaptive Dynamic Termination for Plane Localization in 3D Ultrasound. IEEE Trans Med Imaging 2021; 40:1950-1961. [PMID: 33784618] [DOI: 10.1109/TMI.2021.3069663]
Abstract
Accurate standard plane (SP) localization is the fundamental step for prenatal ultrasound (US) diagnosis. Typically, dozens of US SPs are collected to determine the clinical diagnosis. 2D US has to perform a scan for each SP, which is time-consuming and operator-dependent. In contrast, 3D US, which contains multiple SPs in one shot, has the inherent advantages of less user-dependency and more efficiency. However, automatically locating SPs in 3D US is very challenging due to the huge search space and large fetal posture variations. Our previous study proposed a deep reinforcement learning (RL) framework with an alignment module and active termination to localize SPs in 3D US automatically. However, terminating the agent's search in RL is important and affects practical deployment. In this study, we enhance our previous RL framework with a newly designed adaptive dynamic termination that enables an early stop of the agent's search, saving up to 67% of inference time and thus boosting the accuracy and efficiency of the RL framework at the same time. We also validate the effectiveness and generalizability of our algorithm extensively on in-house multi-organ datasets containing 433 fetal brain volumes, 519 fetal abdomen volumes, and 683 uterus volumes. Our approach achieves localization errors of 2.52 mm/10.26°, 2.48 mm/10.39°, and 2.02 mm/10.48° for the transcerebellar, transventricular, and transthalamic planes in the fetal brain, 2.00 mm/14.57° for the abdominal plane, and 2.61 mm/9.71°, 3.09 mm/9.58°, and 1.49 mm/7.54° for the mid-sagittal, transverse, and coronal planes in the uterus, respectively. Experimental results show that our method is general and has the potential to improve the efficiency and standardization of US scanning.
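The early-stop idea behind adaptive dynamic termination can be caricatured as a convergence test on the agent's plane estimate: halt as soon as the estimate stops changing rather than exhausting a fixed step budget. This is a toy sketch of that idea under simplifying assumptions (a scalar plane parameter and a threshold test), not the paper's learned termination module:

```python
def search_with_dynamic_termination(step_fn, max_steps=100, patience=3, tol=1e-3):
    """Run an iterative agent, stopping once its estimate has changed by
    less than `tol` for `patience` consecutive steps. `step_fn` is the
    (hypothetical) agent update: state -> new state."""
    state = 0.0
    stable = 0
    for t in range(1, max_steps + 1):
        new_state = step_fn(state)
        stable = stable + 1 if abs(new_state - state) < tol else 0
        state = new_state
        if stable >= patience:
            return state, t  # early stop: the remaining budget is saved
    return state, max_steps

# Toy agent converging geometrically toward plane parameter 1.0
final, steps = search_with_dynamic_termination(lambda s: s + 0.5 * (1.0 - s))
print(steps)  # halts long before the 100-step budget
```

The fraction of the budget left unused when the loop halts early is exactly the kind of inference-time saving (up to 67% in the paper) that a dynamic termination criterion buys.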
19
Recognition of Thyroid Ultrasound Standard Plane Images Based on Residual Network. Comput Intell Neurosci 2021; 2021:5598001. [PMID: 34188673] [PMCID: PMC8192196] [DOI: 10.1155/2021/5598001]
Abstract
Ultrasound is one of the critical methods for diagnosis and treatment in thyroid examination. In clinical practice, factors such as heavy outpatient traffic, the time-consuming training of sonographers, and the uneven professional level of physicians often cause irregularities during the ultrasonic examination, leading to misdiagnosis or missed diagnosis. In order to standardize the thyroid ultrasound examination process, this paper proposes a deep learning method based on a residual network to recognize Thyroid Ultrasound Standard Planes (TUSPs). First, referring to multiple relevant guidelines, eight TUSPs were defined with the advice of clinical ultrasound experts. A total of 5,500 TUSP images of the 8 categories were collected with the approval of the Ethics Committee and the patients' informed consent. After desensitizing and filling the images, an 18-layer residual network model (ResNet-18) was trained for TUSP image recognition, and five-fold cross-validation was performed. Finally, its recognition performance was compared with other mainstream deep convolutional neural network models using indicators such as accuracy. Experimental results showed that ResNet-18 achieves the best recognition performance on TUSP images, with an average accuracy of 91.07%. The average macro precision, average macro recall, and average macro F1-score are 91.39%, 91.34%, and 91.30%, respectively. This shows that a residual-network-based deep learning method can effectively recognize TUSP images, which is expected to standardize clinical thyroid ultrasound examination and reduce misdiagnosis and missed diagnosis.
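The macro-averaged precision, recall and F1 reported above are plain per-class scores averaged over the classes with equal weight. A self-contained sketch with a toy two-class example (the 8 TUSP classes would be handled identically):

```python
def macro_scores(pairs, classes):
    """Macro-averaged precision, recall and F1 from a list of
    (true_label, predicted_label) pairs: compute each metric per
    class, then take the unweighted mean over classes."""
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in pairs if t == c and p == c)
        fp = sum(1 for t, p in pairs if t != c and p == c)
        fn = sum(1 for t, p in pairs if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Toy labels (illustrative, not the study's data)
pairs = [("a", "a"), ("a", "b"), ("b", "b"), ("b", "b")]
print(macro_scores(pairs, ["a", "b"]))
```

Macro averaging treats rare and common planes alike, which is why it is a common companion to plain accuracy when class frequencies are imbalanced.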
20
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110] [DOI: 10.1016/j.ejrad.2021.109717]
Abstract
Ultrasound (US), a flexible, green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification and monitoring of diseases. Diagnostic performance is inevitably reduced by the high operator-dependence intrinsic to US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessments of imaging data, showing high potential to assist physicians in obtaining more accurate and reproducible results. In this article, we provide a general understanding of AI, machine learning (ML) and deep learning (DL) technologies. We then review the rapidly growing applications of AI - especially DL - in US, organized by anatomical region: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, the musculoskeletal system and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
21
Singh SP, Wang L, Gupta S, Goli H, Padmanabhan P, Gulyás B. 3D Deep Learning on Medical Images: A Review. Sensors (Basel) 2020; 20:E5097. [PMID: 32906819] [PMCID: PMC7570704] [DOI: 10.3390/s20185097]
Abstract
The rapid advancements in machine learning and graphics processing technologies, together with the availability of medical imaging data, have led to a rapid increase in the use of deep learning models in the medical domain. This was accelerated by the rapid advancements in convolutional neural network (CNN) based architectures, which the medical imaging community adopted to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace how the 3D CNN developed from its machine learning roots, provide a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research on 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
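One building block of the mathematical description such reviews walk through is the output-size rule of a convolution, out = floor((in + 2p - k) / s) + 1, applied independently along each of the three spatial axes of a volume:

```python
def conv3d_output_shape(in_shape, kernel, stride=1, padding=0):
    """Spatial output shape of a 3D convolution with a cubic kernel:
    out = floor((in + 2*padding - kernel) / stride) + 1 per axis."""
    return tuple((d + 2 * padding - kernel) // stride + 1 for d in in_shape)

# A 64^3 volume through a 3x3x3 kernel, stride 1, padding 1 keeps its size
print(conv3d_output_shape((64, 64, 64), kernel=3, stride=1, padding=1))  # (64, 64, 64)
```

The same formula explains why 3D networks are memory-hungry: halving resolution with stride 2 shrinks each axis, but an unstrided, unpadded stack of 3D layers erodes the volume by kernel-1 voxels per layer in every direction.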
Affiliation(s)
- Satya P. Singh
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Lipo Wang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Sukrit Gupta
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Haveesh Goli
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Parasuraman Padmanabhan
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Balázs Gulyás
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Department of Clinical Neuroscience, Karolinska Institute, 17176 Stockholm, Sweden
22
Garcia-Canadilla P, Sanchez-Martinez S, Crispi F, Bijnens B. Machine Learning in Fetal Cardiology: What to Expect. Fetal Diagn Ther 2020; 47:363-372. [PMID: 31910421] [DOI: 10.1159/000505021]
Abstract
In fetal cardiology, imaging (especially echocardiography) has been shown to help in the diagnosis and monitoring of fetuses with a compromised cardiovascular system potentially associated with several fetal conditions. Different ultrasound approaches are currently used to evaluate fetal cardiac structure and function, including conventional 2D imaging, M-mode and tissue Doppler imaging, among others. However, assessment of the fetal heart is still challenging, mainly due to involuntary movements of the fetus, the small size of the heart, and the lack of expertise in fetal echocardiography of some sonographers. Therefore, the use of new technologies to improve the primary acquired images, to help extract measurements, or to aid in the diagnosis of cardiac abnormalities is of great importance for optimal assessment of the fetal heart. Machine learning (ML) is a computer science discipline focused on teaching a computer to perform tasks with specific goals without explicitly programming the rules for how to perform them. In this review we provide a brief overview of the potential of ML techniques to improve the evaluation of fetal cardiac function by optimizing image acquisition and quantification/segmentation, as well as aiding in improving the prenatal diagnosis of fetal cardiac remodeling and abnormalities.
Affiliation(s)
- Patricia Garcia-Canadilla
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Institute of Cardiovascular Science, University College London, London, United Kingdom
- Fatima Crispi
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Fetal Medicine Research Center, BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), Institut Clínic de Ginecologia Obstetricia i Neonatologia, Centre for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Bart Bijnens
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium; ICREA, Barcelona, Spain
23
Ambroise Grandjean G, Hossu G, Banasiak C, Ciofolo-Veit C, Raynaud C, Rouet L, Morel O, Beaumont M. Optimization of Fetal Biometry With 3D Ultrasound and Image Recognition (EPICEA): protocol for a prospective cross-sectional study. BMJ Open 2019; 9:e031777. [PMID: 31843832] [PMCID: PMC6924693] [DOI: 10.1136/bmjopen-2019-031777]
Abstract
CONTEXT Variability in 2D ultrasound (US) is related to the acquisition of reference planes and the positioning of callipers, and could be reduced by combining US volume acquisition with anatomical structure recognition. OBJECTIVES The primary objective is to assess the consistency of 3D measurements (automated and manual) extracted from a fetal US volume with standard 2D US measurements (I). Secondary objectives are to evaluate the feasibility of using software to obtain automated measurements of the fetal head, abdomen and femur from US acquisitions (II) and to assess the impact of automation on intraobserver and interobserver reproducibility (III). METHODS AND ANALYSIS 225 fetuses will be measured at 16-30 weeks of gestation. For each fetus, six volumes (two each for the head, abdomen and thigh) will be prospectively acquired after standard 2D biometry measurements (head and abdominal circumference, femoral length) have been performed. Each volume will later be processed by both software and an operator to extract the reference planes and perform the corresponding measurements. The different sets of measurements will be compared using Bland-Altman plots to assess agreement between the different processes (I). The feasibility of using the software in clinical practice will be assessed through the failure rate of processing and the quality score of the measurements (II). Intraclass correlation coefficients will be used to evaluate intraobserver and interobserver reproducibility (III). ETHICS AND DISSEMINATION The study and related consent forms were approved by an institutional review board (CPP SUD-EST 3) on 2 October 2018, under reference number 2018-033 B. The study was registered in the https://clinicaltrials.gov registry on 23 January 2019.
This study will enable an improved understanding and dissemination of the potential benefits of 3D automated measurements and is a prerequisite for the design of intention-to-treat randomised studies assessing their impact. TRIAL REGISTRATION NUMBER NCT03812471; Pre-results.
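The planned Bland-Altman comparison reduces each pair of methods to a bias (mean difference) and 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch with hypothetical head-circumference values, not data from the study:

```python
def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement
    methods: mean difference and mean difference +/- 1.96 * SD, where SD
    is the sample standard deviation of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical head-circumference measurements (mm) by 2D and automated 3D
hc_2d = [180.0, 182.0, 185.0, 190.0]
hc_3d = [181.0, 181.0, 186.0, 190.0]
bias, lo, hi = bland_altman(hc_2d, hc_3d)
print(round(bias, 2))  # -0.25
```

Unlike a correlation coefficient, this summary exposes systematic offsets between methods, which is why it is the standard choice for method-agreement studies such as this protocol.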
Affiliation(s)
- Gaëlle Ambroise Grandjean
- Obstetrics Department, CHRU Nancy, Nancy, Lorraine, France
- Midwifery Department, Université de Lorraine, Nancy, France
- Inserm IADI, Université de Lorraine, Nancy, France
- Gabriela Hossu
- CIC-IT, CHRU Nancy, Université de Lorraine, Nancy, France
- Olivier Morel
- Obstetrics Department, CHRU Nancy, Nancy, Lorraine, France
- Inserm IADI, Université de Lorraine, Nancy, France
24
Lin Z, Li S, Ni D, Liao Y, Wen H, Du J, Chen S, Wang T, Lei B. Multi-task learning for quality assessment of fetal head ultrasound images. Med Image Anal 2019; 58:101548. [PMID: 31525671] [DOI: 10.1016/j.media.2019.101548]
Abstract
It is essential to measure anatomical parameters in prenatal ultrasound images to assess the growth and development of the fetus, which relies heavily on obtaining a standard plane. However, the acquisition of a standard plane is, in turn, highly subjective and depends on the clinical experience of sonographers. To deal with this challenge, we propose a new multi-task learning framework using a faster regional convolutional neural network (MF R-CNN) architecture for standard plane detection and quality assessment. MF R-CNN identifies the critical anatomical structures of the fetal head, analyzes whether the magnification of the ultrasound image is appropriate, and then performs quality assessment of the ultrasound image based on clinical protocols. Specifically, the first five convolution blocks of MF R-CNN learn features shared within the input data, which can be associated with the detection and classification tasks, and then extend to task-specific output streams. In training, in order to reconcile the different convergence rates of the different tasks, we devise a section train method based on transfer learning. In addition, our proposed method uses prior clinical and statistical knowledge to reduce the false detection rate. By identifying the key anatomical structures and the magnification of the ultrasound image, we score the ultrasound plane of the fetal head to judge whether or not it is a standard image. Experimental results on our own-collected dataset show that our method can accurately perform a quality assessment of an ultrasound plane within half a second. Our method achieves promising performance compared with state-of-the-art methods, which can improve examination effectiveness and alleviate the measurement error caused by improper ultrasound scanning.
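The protocol-based scoring step can be sketched as a checklist: a point for each required anatomical structure the detector finds, plus a point when the head's magnification falls in an acceptable range. The structure names, equal weights and zoom range below are illustrative assumptions, not the exact clinical protocol used in the paper:

```python
def score_fetal_head_plane(detected, zoom_ratio,
                           required=("thalami", "cavum septi pellucidi", "midline"),
                           zoom_range=(0.5, 0.8)):
    """Checklist-style quality score for a candidate standard plane:
    one point per required structure detected, plus one point if the
    head occupies an appropriate fraction of the image width.
    Returns (score, is_standard_plane)."""
    score = sum(1 for s in required if s in detected)
    if zoom_range[0] <= zoom_ratio <= zoom_range[1]:
        score += 1
    return score, score == len(required) + 1

# A plane with all required structures found and suitable magnification
print(score_fetal_head_plane({"thalami", "midline", "cavum septi pellucidi"}, 0.65))
```

Keeping the scoring rule explicit like this (rather than folding it into the network) is what lets the output be checked against the written clinical protocol.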
Affiliation(s)
- Zehui Lin
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Shengli Li
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Yimei Liao
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Huaxuan Wen
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Jie Du
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Siping Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China