1
Yeung PH, Hesse LS, Aliasi M, Haak MC, Xie W, Namburete AIL. Sensorless volumetric reconstruction of fetal brain freehand ultrasound scans with deep implicit representation. Med Image Anal 2024; 94:103147. PMID: 38547665. DOI: 10.1016/j.media.2024.103147.
Abstract
Three-dimensional (3D) ultrasound imaging has contributed to our understanding of fetal developmental processes by providing rich contextual information of the inherently 3D anatomies. However, its use is limited in clinical settings, due to the high purchasing costs and limited diagnostic practicality. Freehand 2D ultrasound imaging, in contrast, is routinely used in standard obstetric exams, but inherently lacks a 3D representation of the anatomies, which limits its potential for more advanced assessment. Such full representations are challenging to recover even with external tracking devices due to internal fetal movement which is independent from the operator-led trajectory of the probe. Capitalizing on the flexibility offered by freehand 2D ultrasound acquisition, we propose ImplicitVol to reconstruct 3D volumes from non-sensor-tracked 2D ultrasound sweeps. Conventionally, reconstructions are performed on a discrete voxel grid. We, however, employ a deep neural network to represent, for the first time, the reconstructed volume as an implicit function. Specifically, ImplicitVol takes a set of 2D images as input, predicts their locations in 3D space, jointly refines the inferred locations, and learns a full volumetric reconstruction. When testing natively-acquired and volume-sampled 2D ultrasound video sequences collected from different manufacturers, the 3D volumes reconstructed by ImplicitVol show significantly better visual and semantic quality than the existing interpolation-based reconstruction approaches. The inherent continuity of implicit representation also enables ImplicitVol to reconstruct the volume to arbitrarily high resolutions. As formulated, ImplicitVol has the potential to integrate seamlessly into the clinical workflow, while providing richer information for diagnosis and evaluation of the developing brain.
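The core idea of the abstract — a volume represented as a continuous function of 3D coordinates rather than a discrete voxel grid — can be sketched as follows. This is an illustrative toy, not ImplicitVol itself: a closed-form Gaussian blob (`implicit_volume`) stands in for the trained network, and `sample_plane` is a hypothetical helper showing how any slice can be rendered at arbitrary resolution.

```python
import numpy as np

def implicit_volume(coords):
    """Toy implicit representation: maps (N, 3) points in [-1, 1]^3 to
    intensities. ImplicitVol learns such a mapping with a deep network;
    here a closed-form Gaussian blob stands in for it."""
    return np.exp(-np.sum(coords ** 2, axis=1) / 0.1)

def sample_plane(center, normal, size, res):
    """Render a square 2D slice of the implicit volume at any resolution."""
    normal = normal / np.linalg.norm(normal)
    # Build two orthonormal in-plane axes perpendicular to the normal.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:          # normal parallel to z-axis
        u = np.array([1.0, 0.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(normal, u)
    t = np.linspace(-size / 2, size / 2, res)
    gu, gv = np.meshgrid(t, t)
    pts = center + gu[..., None] * u + gv[..., None] * v   # (res, res, 3)
    return implicit_volume(pts.reshape(-1, 3)).reshape(res, res)

# The same plane can be queried at any resolution -- no fixed voxel grid.
low = sample_plane(np.zeros(3), np.array([0.0, 1.0, 1.0]), 1.0, 32)
high = sample_plane(np.zeros(3), np.array([0.0, 1.0, 1.0]), 1.0, 256)
print(low.shape, high.shape)  # (32, 32) (256, 256)
```

Because the representation is a function rather than an array, "reconstructing to arbitrarily high resolution" is just evaluating it on a denser coordinate grid.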
Affiliation(s)
- Pak-Hei Yeung
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom; Oxford Machine Learning in NeuroImaging Lab, Department of Computer Science, University of Oxford, OX1 3QD, United Kingdom.
- Linde S Hesse
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom; Oxford Machine Learning in NeuroImaging Lab, Department of Computer Science, University of Oxford, OX1 3QD, United Kingdom
- Moska Aliasi
- Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Monique C Haak
- Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Weidi Xie
- Shanghai Jiao Tong University, Shanghai, 200240, China; Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Ana I L Namburete
- Oxford Machine Learning in NeuroImaging Lab, Department of Computer Science, University of Oxford, OX1 3QD, United Kingdom
2
Belciug S. Autonomous fetal morphology scan: deep learning + clustering merger - the second pair of eyes behind the doctor. BMC Med Inform Decis Mak 2024; 24:102. PMID: 38641580. PMCID: PMC11027391. DOI: 10.1186/s12911-024-02505-3.
Abstract
Congenital anomalies are the leading cause of fetal death and of infant morbidity and mortality during childhood. They can be detected through a fetal morphology scan. An experienced sonographer (with more than 2000 performed scans) detects congenital anomalies at a rate of around 52%; for a junior sonographer, the detection rate drops to 32.5%. One viable way to improve these rates is to use Artificial Intelligence. The first step in a fetal morphology scan is the differentiation between the view planes of the fetus, followed by segmentation of the internal organs in each view plane. This study presents an Artificial Intelligence-empowered decision support system that labels anatomical organs using a merger of deep learning and clustering techniques, followed by organ segmentation with YOLOv8. Our framework was tested on a fetal morphology image dataset covering the fetal abdomen. The experimental results show that the system can correctly label the view plane and the corresponding organs on real-time ultrasound movies. Trial registration: The study is registered under the name "Pattern recognition and Anomaly Detection in fetal morphology using Deep Learning and Statistical Learning (PARADISE)", project number 101PCE/2022, project code PN-III-P4-PCE-2021-0057. ClinicalTrials.gov, unique identifying number NCT05738954, date of registration 02.11.2023.
Affiliation(s)
- Smaranda Belciug
- Department of Computer Science, Faculty of Sciences, University of Craiova, 200585, Craiova, Romania.
3
Devisri B, Kavitha M. Fetal growth analysis from ultrasound videos based on different biometrics using optimal segmentation and hybrid classifier. Stat Med 2024; 43:1019-1047. PMID: 38155152. DOI: 10.1002/sim.9995.
Abstract
Birth defects and their associated deaths, the high health and financial costs of maternal care, and associated morbidity are major contributors to infant mortality. Where permitted by law, prenatal diagnosis allows for intrauterine care, planning of more complicated hospital deliveries, and termination of pregnancy. During pregnancy, a set of measurements is commonly used to monitor fetal health, including fetal head circumference, crown-rump length, abdominal circumference, and femur length. Because of the intricate interactions between the US waves and the biological tissues of mother and fetus, analyzing fetal US images is difficult: artifacts include acoustic shadows, speckle noise, motion blur, and missing borders; the fetus moves quickly; body structures lie close together; and appearance varies greatly across the weeks of pregnancy. In this work, we propose a fetal growth analysis from US images using head-circumference biometry with optimal segmentation and a hybrid classifier. First, we introduce a hybrid whale with oppositional fruit fly optimization (WOFF) algorithm for optimal segmentation of the fetal head, which improves detection accuracy. Next, an improved U-Net design is utilized to extract hidden features (head-circumference biometry) from the segmented region. Then, we design a modified Boosting arithmetic optimization (MBAO) algorithm for feature selection, choosing the best among multiple features to reduce data dimensionality. Furthermore, a hybrid deep learning technique, bi-directional LSTM with a convolutional neural network (B-LSTM-CNN), is applied to assess fetal growth and health. Finally, we validate the proposed method on the open benchmark datasets HC18 (ultrasound images) and the Oxford University Research Archive (ORA-data; ultrasound video frames), and compare the simulation results with existing state-of-the-art techniques in terms of various metrics.
Affiliation(s)
- B Devisri
- Department of Electronics and communication Engineering, K. Ramakrishnan College of Technology, (Affiliated to Anna University Chennai), Trichy, India
- M Kavitha
- Department of Electronics and Communication Engineering, K. Ramakrishnan College of Technology, Trichy, India
4
Belciug S, Ivanescu RC, Serbanescu MS, Ispas F, Nagy R, Comanescu CM, Istrate-Ofiteru A, Iliescu DG. Pattern Recognition and Anomaly Detection in fetal morphology using Deep Learning and Statistical learning (PARADISE): protocol for the development of an intelligent decision support system using fetal morphology ultrasound scan to detect fetal congenital anomaly detection. BMJ Open 2024; 14:e077366. PMID: 38365300. PMCID: PMC10875539. DOI: 10.1136/bmjopen-2023-077366.
Abstract
INTRODUCTION Congenital anomalies are the most frequently encountered cause of fetal death, infant mortality and morbidity; 7.9 million infants are born with congenital anomalies yearly. Early detection of congenital anomalies facilitates life-saving treatments and stops the progression of disabilities. Congenital anomalies can be diagnosed prenatally through morphology scans. A correct interpretation of the morphology scan allows a detailed discussion with the parents regarding the prognosis. The central feature of this project is the development of a specialised intelligent system that uses two-dimensional ultrasound movies obtained during the standard second-trimester morphology scan to identify congenital anomalies in fetuses. METHODS AND ANALYSIS The project rests on three pillars: a committee of deep learning and statistical learning algorithms, statistical analysis, and operational research through learning curves. The cross-sectional study is divided into a training phase, in which the system learns to detect congenital anomalies from fetal morphology ultrasound scans, and a testing phase on previously unseen scans. In the training phase, the intelligent system will learn to meet the following specific objectives: (a) guide the sonographer's probe for better acquisition; (b) automatically detect, measure and store the fetal planes and (c) signal unusual findings. During the testing phase, the system will automatically perform the above tasks on previously unseen videos. Pregnant patients in their second trimester admitted for their routine scan will be consecutively included in a 32-month study (4 May 2022-31 December 2024). The number of patients is 4000, enrolled by 10 doctors/sonographers. We will develop an intelligent system that uses multiple artificial intelligence algorithms that interact with each other, in bulk or individually. For each anatomical part, one algorithm will be in charge of detecting it, followed by another algorithm that will detect whether anomalies are present. The sonographers will validate the findings at each intermediate step. ETHICS AND DISSEMINATION All protocols and the informed consent form comply with the Health Ministry and professional society ethics guidelines. The University of Craiova Ethics Committee has approved this study protocol, as has the Romanian Ministry of Research, Innovation and Digitization, which funded this research. The study will be implemented and reported in line with the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) statement. TRIAL REGISTRATION NUMBER The study is registered under the name 'Pattern recognition and Anomaly Detection in fetal morphology using Deep Learning and Statistical Learning', project number 101PCE/2022, project code PN-III-P4-PCE-2021-0057. TRIAL REGISTRATION ClinicalTrials.gov, unique identifying number NCT05738954, date of registration: 2 November 2023.
Affiliation(s)
- Smaranda Belciug
- Department of Computer Science, University of Craiova, Craiova, Romania
- Florin Ispas
- Department of Computer Science, University of Craiova, Craiova, Romania
- Rodica Nagy
- University of Medicine and Pharmacy of Craiova, Craiova, Romania
5
Vece CD, Lous ML, Dromey B, Vasconcelos F, David AL, Peebles D, Stoyanov D. Ultrasound Plane Pose Regression: Assessing Generalized Pose Coordinates in the Fetal Brain. IEEE Trans Med Robot Bionics 2024; 6:41-52. PMID: 38881728. PMCID: PMC7616102. DOI: 10.1109/tmrb.2023.3328638.
Abstract
In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from two-dimensional (2D) US images represents a significant challenge in skill acquisition. We aim to build a US plane localization system for 3D visualization, training, and guidance without integrating additional sensors. This work builds on our previous work, which predicts the six-dimensional (6D) pose of arbitrarily oriented US planes slicing the fetal brain, with respect to a normalized reference frame, using a convolutional neural network (CNN) regression model. Here, we analyze in detail the assumptions behind the normalized fetal brain reference frame and quantify its accuracy with respect to the acquisition of the transventricular (TV) standard plane (SP) for fetal biometry. We investigate the impact of registration quality in the training and testing data and its subsequent effect on trained models. Finally, we introduce data augmentations and larger training sets that improve on our previous results, achieving median errors of 2.97 mm and 6.63° for translation and rotation, respectively.
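The reported median errors correspond to the standard pose-error metrics for 6D pose: Euclidean distance for translation and the geodesic angle between rotation matrices for rotation. A minimal sketch of these generic metrics (not the paper's own evaluation code):

```python
import numpy as np

def rotation_error_deg(R_pred, R_true):
    """Geodesic distance between two rotation matrices, in degrees:
    angle = arccos((trace(R_pred^T R_true) - 1) / 2)."""
    R = R_pred.T @ R_true
    cos = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def translation_error(t_pred, t_true):
    """Euclidean distance between predicted and true plane centres."""
    return np.linalg.norm(t_pred - t_true)

# Example: a 10-degree rotation about z and a 3 mm offset along x.
a = np.radians(10.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
print(round(rotation_error_deg(np.eye(3), Rz), 2))                # 10.0
print(translation_error(np.zeros(3), np.array([3.0, 0.0, 0.0])))  # 3.0
```

The clip guards against `trace` values marginally outside [-1, 3] from floating-point error, which would otherwise make `arccos` return NaN.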
Affiliation(s)
- Chiara Di Vece
- EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, WC1E 6DB London, U.K
- Maela Le Lous
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K.
- Brian Dromey
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K.
- Francisco Vasconcelos
- EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, WC1E 6DB London, U.K.
- Anna L David
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K.
- Donald Peebles
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K.
- Danail Stoyanov
- EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, WC1E 6DB London, U.K.
6
Jafrasteh B, Lubián-López SP, Benavente-Fernández I. A deep sift convolutional neural networks for total brain volume estimation from 3D ultrasound images. Comput Methods Programs Biomed 2023; 242:107805. PMID: 37738840. DOI: 10.1016/j.cmpb.2023.107805.
Abstract
Preterm infants are a highly vulnerable population. The total brain volume (TBV) of these infants can be accurately estimated by brain ultrasound (US) imaging, which enables a longitudinal study of early brain growth during Neonatal Intensive Care Unit (NICU) admission. Automatic estimation of TBV from 3D images speeds up diagnosis and avoids the need for an expert to manually segment 3D images, a laborious and time-consuming task. We develop a deep-learning approach to estimate TBV from 3D ultrasound images. It benefits from deep convolutional neural networks (CNNs) with dilated residual connections and an additional layer, inspired by fuzzy c-means (FCM), that further separates the features into different regions, i.e. a sift layer. We therefore call this method deep-sift convolutional neural networks (DSCNN). The proposed method is validated against three state-of-the-art methods, AlexNet-3D, ResNet-3D, and VGG-3D, for TBV estimation using two datasets acquired from two different ultrasound devices. The results highlight a strong correlation between the predictions and the observed TBV values. Regression activation maps are used to interpret DSCNN, allowing TBV estimation to be explained by the pixels that are most consistent and plausible from an anatomical standpoint. The method can therefore estimate TBV directly from 3D images without needing further image segmentation.
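The sift layer is described only as "inspired by fuzzy c-means". As a rough illustration of that ingredient alone (not the DSCNN layer itself), the classic FCM soft-membership rule that separates feature vectors into regions is u_ik = 1 / Σ_j (d_ik / d_jk)^(2/(m-1)):

```python
import numpy as np

def fcm_memberships(features, centers, m=2.0):
    """Fuzzy c-means soft memberships of each feature vector to each
    cluster centre. features: (N, D); centers: (C, D); m > 1 is the
    fuzzifier. Returns (N, C) memberships that sum to 1 per row."""
    # Pairwise distances d[i, k] between feature i and centre k.
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                     # avoid division by zero
    # ratio[i, k, j] = (d_ik / d_ij)^(2/(m-1)); sum over j gives 1/u_ik.
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

feats = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
u = fcm_memberships(feats, centers)
# Each row sums to 1; points near a centre get membership close to 1.
```

In DSCNN this kind of soft assignment is applied to learned feature maps rather than raw points, so the network can route features into region-specific groups before regression.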
Affiliation(s)
- Bahram Jafrasteh
- Biomedical Research and Innovation Institute of Cádiz (INiBICA), Puerta del Mar University, Cádiz, Spain.
- Simón Pedro Lubián-López
- Biomedical Research and Innovation Institute of Cádiz (INiBICA), Puerta del Mar University, Cádiz, Spain; Division of Neonatology, Department of Paediatrics, Puerta del Mar University Hospital, Cádiz, Spain.
- Isabel Benavente-Fernández
- Biomedical Research and Innovation Institute of Cádiz (INiBICA), Puerta del Mar University, Cádiz, Spain; Division of Neonatology, Department of Paediatrics, Puerta del Mar University Hospital, Cádiz, Spain; Area of Paediatrics, Department of Child and Mother Health and Radiology, Medical School, University of Cádiz, Cádiz, Spain.
7
Namburete AIL, Papież BW, Fernandes M, Wyburd MK, Hesse LS, Moser FA, Ismail LC, Gunier RB, Squier W, Ohuma EO, Carvalho M, Jaffer Y, Gravett M, Wu Q, Lambert A, Winsey A, Restrepo-Méndez MC, Bertino E, Purwar M, Barros FC, Stein A, Noble JA, Molnár Z, Jenkinson M, Bhutta ZA, Papageorghiou AT, Villar J, Kennedy SH. Normative spatiotemporal fetal brain maturation with satisfactory development at 2 years. Nature 2023; 623:106-114. PMID: 37880365. PMCID: PMC10620088. DOI: 10.1038/s41586-023-06630-3.
Abstract
Maturation of the human fetal brain should follow precisely scheduled structural growth and folding of the cerebral cortex for optimal postnatal function. We present a normative digital atlas of fetal brain maturation based on a prospective international cohort of healthy pregnant women, selected using World Health Organization recommendations for growth standards. Their fetuses were accurately dated in the first trimester, with satisfactory growth and neurodevelopment from early pregnancy to 2 years of age. The atlas was produced using 1,059 optimal quality, three-dimensional ultrasound brain volumes from 899 of the fetuses and an automated analysis pipeline. The atlas corresponds structurally to published magnetic resonance images, but with finer anatomical details in deep grey matter. The between-study site variability represented less than 8.0% of the total variance of all brain measures, supporting pooling data from the eight study sites to produce patterns of normative maturation. We have thereby generated an average representation of each cerebral hemisphere between 14 and 31 weeks' gestation with quantification of intracranial volume variability and growth patterns. Emergent asymmetries were detectable from as early as 14 weeks, with peak asymmetries in regions associated with language development and functional lateralization between 20 and 26 weeks' gestation. These patterns were validated in 1,487 three-dimensional brain volumes from 1,295 different fetuses in the same cohort. We provide a unique spatiotemporal benchmark of fetal brain maturation from a large cohort with normative postnatal growth and neurodevelopment.
Affiliation(s)
- Ana I L Namburete
- Oxford Machine Learning in Neuroimaging Laboratory, Department of Computer Science, University of Oxford, Oxford, UK.
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK.
- Department of Engineering Science, University of Oxford, Oxford, UK.
- Bartłomiej W Papież
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Michelle Fernandes
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- MRC Lifecourse Epidemiology Centre, Human Development and Health Academic Unit, Faculty of Medicine, University of Southampton, Southampton, UK
- Oxford Maternal and Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
- Madeleine K Wyburd
- Oxford Machine Learning in Neuroimaging Laboratory, Department of Computer Science, University of Oxford, Oxford, UK
- Linde S Hesse
- Oxford Machine Learning in Neuroimaging Laboratory, Department of Computer Science, University of Oxford, Oxford, UK
- Department of Engineering Science, University of Oxford, Oxford, UK
- Felipe A Moser
- Oxford Machine Learning in Neuroimaging Laboratory, Department of Computer Science, University of Oxford, Oxford, UK
- Leila Cheikh Ismail
- Department of Clinical Nutrition and Dietetics, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Robert B Gunier
- Center for Environmental Research and Children's Health, School of Public Health, University of California, Berkeley, CA, USA
- Waney Squier
- Department of Neuropathology, John Radcliffe Hospital, Oxford, UK
- Eric O Ohuma
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Maternal, Adolescent, Reproductive and Child Health Centre, London School of Hygiene and Tropical Medicine, London, UK
- Maria Carvalho
- Department of Obstetrics and Gynaecology, Faculty of Health Sciences, Aga Khan University Hospital, Nairobi, Kenya
- Yasmin Jaffer
- Department of Family and Community Health, Ministry of Health, Muscat, Sultanate of Oman
- Michael Gravett
- Departments of Obstetrics and Gynecology and of Global Health, University of Washington, Seattle, WA, USA
- Qingqing Wu
- School of Public Health, Peking University, Beijing, China
- Ann Lambert
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Oxford Maternal and Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
- Adele Winsey
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Enrico Bertino
- Dipartimento di Scienze Pediatriche e dell' Adolescenza, SCDU Neonatologia, Universita di Torino, Turin, Italy
- Manorama Purwar
- Nagpur INTERGROWTH-21st Research Centre, Ketkar Hospital, Nagpur, India
- Fernando C Barros
- Programa de Pós-Graduação em Saúde e Comportamento, Universidade Católica de Pelotas, Pelotas, Brazil
- Alan Stein
- Department of Psychiatry, University of Oxford, Oxford, UK
- African Health Research Institute, KwaZulu-Natal, South Africa
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of Witwatersrand, Johannesburg, South Africa
- J Alison Noble
- Department of Engineering Science, University of Oxford, Oxford, UK
- Zoltán Molnár
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Australian Institute for Machine Learning, Department of Computer Science, University of Adelaide, Adelaide, South Australia, Australia
- South Australian Health and Medical Research Institute, Adelaide, South Australia, Australia
- Zulfiqar A Bhutta
- Center for Global Child Health, Hospital for Sick Children, Toronto, Ontario, Canada
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Oxford Maternal and Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
- José Villar
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Oxford Maternal and Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
- Stephen H Kennedy
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Oxford Maternal and Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
8
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. PMID: 37959298. PMCID: PMC10649694. DOI: 10.3390/jcm12216833.
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is considered cost-effective and easily accessible, but it is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to review recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. As for methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full-text copies were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. In conclusion, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still-experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
9
Wright R, Gomez A, Zimmer VA, Toussaint N, Khanal B, Matthew J, Skelton E, Kainz B, Rueckert D, Hajnal JV, Schnabel JA. Fast fetal head compounding from multi-view 3D ultrasound. Med Image Anal 2023; 89:102793. PMID: 37482034. DOI: 10.1016/j.media.2023.102793.
Abstract
The diagnostic value of ultrasound images may be limited by the presence of artefacts, notably acoustic shadows, lack of contrast and localised signal dropout. Some of these artefacts depend on probe orientation and scan technique, with each image giving a distinct, partial view of the imaged anatomy. In this work, we propose a novel method to fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent 3D volume of the full anatomy. Firstly, a stream of freehand 3D US images is acquired using a single probe, capturing as many different views of the head as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using a recurrent spatial transformer network, making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the most consistent and salient features from all images, producing a more detailed compounding while minimising artefacts. We evaluated our method quantitatively and qualitatively, using image quality metrics and expert ratings, yielding state-of-the-art performance in terms of image quality and robustness to misalignments. Being online, fast and fully automated, our method shows promise for clinical use and deployment as a real-time tool in the fetal screening clinic, where it may enable unparalleled insight into the shape and structure of the face, skull and brain.
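The fusion step — averaging only the most consistent values across views — can be caricatured with a simple robust voxel-wise rule. This is a hypothetical stand-in for the paper's salience-weighted compounding, not its actual method: per voxel, keep only the view values close to the across-view median and average those, so that a shadowed or dropped-out view does not corrupt the compound.

```python
import numpy as np

def fuse_views(views, tol=0.2):
    """Fuse co-aligned intensity maps voxel-wise.
    views: (n_views, ...) array of the same anatomy seen from n views.
    Values further than `tol` from the per-voxel median are treated as
    view-dependent artefacts and excluded from the average."""
    med = np.median(views, axis=0)                 # robust per-voxel reference
    keep = np.abs(views - med) <= tol              # the mutually consistent values
    w = keep.astype(float)
    return (w * views).sum(axis=0) / np.maximum(w.sum(axis=0), 1.0)

# Two voxels seen from three views; in the first voxel one view is
# corrupted (e.g. an acoustic shadow reading 1.5 instead of ~0.5).
views = np.array([[0.5, 0.3],
                  [0.52, 0.31],
                  [1.5, 0.29]])
fused = fuse_views(views)  # outlier rejected: fused ≈ [0.51, 0.30]
```

The real pipeline additionally weights by feature saliency and runs online, but the consistency-gated average captures why the compound is sharper than a naive mean of all views.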
Affiliation(s)
- Robert Wright
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK.
- Alberto Gomez
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- Veronika A Zimmer
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK; Department of Informatics, Technische Universität München, Germany
- Bishesh Khanal
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK; Nepal Applied Mathematics and Informatics Institute for Research (NAAMII), Nepal
- Jacqueline Matthew
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- Emily Skelton
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK; School of Health Sciences, City, University of London, London, UK
- Daniel Rueckert
- Department of Computing, Imperial College London, UK; School of Medicine and Department of Informatics, Technische Universität München, Germany
- Joseph V Hajnal
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- Julia A Schnabel
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK; Department of Informatics, Technische Universität München, Germany; Helmholtz Zentrum München - German Research Center for Environmental Health, Germany
|
10
|
Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802 DOI: 10.1002/pd.6411]
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the following keywords: "neural networks", OR "artificial intelligence", OR "machine learning", OR "deep learning", AND "obstetrics", OR "obstetrical", OR "fetus", OR "foetus", OR "fetal", OR "foetal", OR "pregnancy", OR "pregnant", AND "ultrasound", from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound. Obstetric ultrasound was defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. Our search identified a total of 127 papers that fulfilled our inclusion criteria. The current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses including assessment of fetal lung maturity and screening for the risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Lea Nehme
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Alfred Abuhamad
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA

11
Zheng JQ, Lim NH, Papież BW. Accurate volume alignment of arbitrarily oriented tibiae based on a mutual attention network for osteoarthritis analysis. Comput Med Imaging Graph 2023; 106:102204. [PMID: 36863214 DOI: 10.1016/j.compmedimag.2023.102204]
Abstract
Damage to cartilage is an important indicator of osteoarthritis progression, but manual extraction of cartilage morphology is time-consuming and prone to error. To address this, we hypothesize that automatic labeling of cartilage can be achieved through the comparison of contrasted and non-contrasted Computed Tomography (CT). However, this is non-trivial as the pre-clinical volumes are at arbitrary starting poses due to the lack of standardized acquisition protocols. Thus, we propose an annotation-free deep learning method, D-Net, for accurate and automatic alignment of pre- and post-contrasted cartilage CT volumes. D-Net is based on a novel mutual attention network structure to capture large-range translation and full-range rotation without the need for a prior pose template. CT volumes of mouse tibiae are used for validation, with synthetic transformations for training and real pre- and post-contrasted CT volumes for testing. Analysis of Variance (ANOVA) was used to compare the different network structures. Our proposed method, D-Net, achieves a Dice coefficient of 0.87 and significantly outperforms other state-of-the-art deep learning models in the real-world alignment of 50 pairs of pre- and post-contrasted CT volumes when cascaded as a multi-stage network.
Affiliation(s)
- Jian-Qing Zheng
- The Kennedy Institute of Rheumatology, University of Oxford, UK
- Ngee Han Lim
- The Kennedy Institute of Rheumatology, University of Oxford, UK

12
Belciug S, Iliescu DG. Deep Learning and Gaussian Mixture Modelling clustering mix. A new approach for fetal morphology view plane differentiation. J Biomed Inform 2023; 143:104402. [PMID: 37217028 DOI: 10.1016/j.jbi.2023.104402]
Abstract
The last three years have been a game changer in the way medicine is practiced. The COVID-19 pandemic changed the obstetrics and gynecology landscape. Pregnancy complications, and even deaths, are preventable through maternal-fetal monitoring. A fast and accurate diagnosis can be established by combining a doctor's expertise with Artificial Intelligence. The aim of this paper is to propose a framework that merges deep learning algorithms with Gaussian Mixture Modelling clustering to differentiate between the view planes of a second-trimester fetal morphology scan. The deep learning methods chosen for this approach were ResNet50, DenseNet121, InceptionV3, EfficientNetV2S, MobileNetV3Large, and Xception. The framework establishes a hierarchy of the component networks using a statistical fitness function and the Gaussian Mixture Modelling clustering method, followed by a synergetic weighted vote of the algorithms that gives the final decision. We tested the framework on two second-trimester morphology scan datasets. A thorough statistical benchmarking process is provided to validate our results. The experimental results showed that the synergetic vote of the framework outperforms the vote of each stand-alone deep learning network, as well as hard voting, soft voting, and bagging strategies.
Affiliation(s)
- Smaranda Belciug
- Department of Computer Science, Faculty of Sciences, University of Craiova, Craiova, 200585, Romania
- Dominic Gabriel Iliescu
- Department of Computer Science, Faculty of Sciences, University of Craiova, Craiova, 200585, Romania; Department no. 2, University of Medicine and Pharmacy of Craiova, Romania

13
Bastiaansen WAP, Klein S, Koning AHJ, Niessen WJ, Steegers-Theunissen RPM, Rousian M. Computational methods for the analysis of early-pregnancy brain ultrasonography: a systematic review. EBioMedicine 2023; 89:104466. [PMID: 36796233 PMCID: PMC9958260 DOI: 10.1016/j.ebiom.2023.104466]
Abstract
BACKGROUND Early screening of the brain is becoming routine clinical practice. Currently, this screening is performed by manual measurements and visual analysis, which is time-consuming and prone to errors. Computational methods may support this screening. Hence, the aim of this systematic review is to gain insight into future research directions needed to bring automated early-pregnancy ultrasound analysis of the human brain to clinical practice. METHODS We searched PubMed (Medline ALL Ovid), EMBASE, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, and Google Scholar, from inception until June 2022. This study is registered in PROSPERO at CRD42020189888. Studies about computational methods for the analysis of human brain ultrasonography acquired before the 20th week of pregnancy were included. The key reported attributes were: level of automation, learning-based or not, the usage of clinical routine data depicting normal and abnormal brain development, public sharing of program source code and data, and analysis of the confounding factors. FINDINGS Our search identified 2575 studies, of which 55 were included. 76% used an automatic method, 62% a learning-based method, 45% used clinical routine data, and for 13% the data additionally depicted abnormal development. None of the studies publicly shared the program source code, and only two studies shared the data. Finally, 35% did not analyse the influence of confounding factors. INTERPRETATION Our review showed an interest in automatic, learning-based methods. To bring these methods to clinical practice we recommend that studies: use routine clinical data depicting both normal and abnormal development, make their dataset and program source code publicly available, and be attentive to the influence of confounding factors.
Introduction of automated computational methods for early-pregnancy brain ultrasonography will save valuable time during screening, and ultimately lead to better detection, treatment and prevention of neuro-developmental disorders. FUNDING The Erasmus MC Medical Research Advisor Committee (grant number: FB 379283).
Affiliation(s)
- Wietske A P Bastiaansen
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands; Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Anton H J Koning
- Department of Pathology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Wiro J Niessen
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Melek Rousian
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands

14
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which is capable of accomplishing at least two tasks simultaneously, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits in improving performance, enhancing generalizability, and reducing the overall computational cost. This review focuses on the advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing is flourishing and demonstrates outstanding performance in many tasks, performance gaps remain in others; accordingly, we discuss the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate that further research efforts are in high demand to escalate the performance of current models.
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China

15
Sarno L, Neola D, Carbone L, Saccone G, Carlea A, Miceli M, Iorio GG, Mappa I, Rizzo G, Girolamo RD, D'Antonio F, Guida M, Maruotti GM. Use of artificial intelligence in obstetrics: not quite ready for prime time. Am J Obstet Gynecol MFM 2023; 5:100792. [PMID: 36356939 DOI: 10.1016/j.ajogmf.2022.100792]
Abstract
Artificial intelligence is finding several applications in healthcare settings. This study aimed to report evidence on the effectiveness of artificial intelligence applications in obstetrics. Through a narrative review of the literature, we described artificial intelligence use in different obstetrical areas as follows: prenatal diagnosis, fetal heart monitoring, prediction and management of pregnancy-related complications (preeclampsia, preterm birth, gestational diabetes mellitus, and placenta accreta spectrum), and labor. Artificial intelligence seems to be a promising tool to help clinicians in daily clinical activity. The main advantages that emerged from this review are reduced inter- and intra-operator variability, shorter procedure times, and improved overall diagnostic performance. However, the diffusion of these systems into routine clinical practice raises several issues. Reported evidence is still very limited, and further studies are needed to confirm the clinical applicability of artificial intelligence. Moreover, better training of the clinicians who will use these systems should be ensured, and evidence-based guidelines regarding this topic should be produced to enhance the strengths of artificial systems and minimize their limitations.
Affiliation(s)
- Laura Sarno
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Daniele Neola
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Luigi Carbone
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Gabriele Saccone
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Annunziata Carlea
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Marco Miceli
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida); CEINGE Biotecnologie Avanzate, Naples, Italy (Dr Miceli)
- Giuseppe Gabriele Iorio
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Ilenia Mappa
- Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy (Dr Mappa and Dr Rizzo)
- Giuseppe Rizzo
- Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy (Dr Mappa and Dr Rizzo)
- Raffaella Di Girolamo
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Francesco D'Antonio
- Center for Fetal Care and High Risk Pregnancy, Department of Obstetrics and Gynecology, University G. D'Annunzio of Chieti-Pescara, Chieti, Italy (Dr D'Antonio)
- Maurizio Guida
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Giuseppe Maria Maruotti
- Gynecology and Obstetrics Unit, Department of Public Health, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Maruotti)

16
Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861 DOI: 10.1016/j.media.2022.102629]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing ultrasound (US) fetal images. A number of survey papers in the field are available today, but most of them focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and discussed from both the methodology and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical-structure analysis and (iii) biometry-parameter estimation. For each category, the main limitations and open issues are presented. Summary tables are included to facilitate the comparison among the different approaches. In addition, emerging applications are also outlined. Publicly available datasets and performance metrics commonly used to assess algorithm performance are summarized, too. This paper ends with a critical summary of the current state of the art on DL algorithms for fetal US image analysis and a discussion of current challenges that have to be tackled by researchers working in the field to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy

17
Caspi Y, de Zwarte SMC, Iemenschot IJ, Lumbreras R, de Heus R, Bekker MN, Hulshoff Pol H. Automatic measurements of fetal intracranial volume from 3D ultrasound scans. Front Neuroimaging 2022; 1:996702. [PMID: 37555155 PMCID: PMC10406279 DOI: 10.3389/fnimg.2022.996702]
Abstract
Three-dimensional fetal ultrasound is commonly used to study the volumetric development of brain structures. To date, only a limited number of automatic procedures for delineating the intracranial volume exist. Hence, intracranial volume measurements from three-dimensional ultrasound images are predominantly performed manually. Here, we present and validate an automated tool to extract the intracranial volume from three-dimensional fetal ultrasound scans. The procedure is based on the registration of a brain model to a subject brain. The intracranial volume of the subject is measured by applying the inverse of the final transformation to an intracranial mask of the brain model. The automatic measurements showed a high correlation with manual delineation of the same subjects at two gestational ages, namely, around 20 and 30 weeks (linear fitting R2: 20 weeks = 0.88, 30 weeks = 0.77; Intraclass Correlation Coefficients: 20 weeks = 0.94, 30 weeks = 0.84). Overall, the automatic intracranial volumes were larger than the manually delineated ones (84 ± 16 vs. 76 ± 15 cm3; and 274 ± 35 vs. 237 ± 28 cm3), probably due to differences in cerebellum delineation. Notably, the automated measurements reproduced both the non-linear pattern of fetal brain growth and the increased inter-subject variability for older fetuses. By contrast, there was some disagreement between the manual and automatic delineation concerning the size of sexual dimorphism differences. The method presented here provides a relatively efficient way to delineate volumes of fetal brain structures like the intracranial volume automatically. It can be used as a research tool to investigate these structures in large cohorts, which will ultimately aid in understanding fetal structural human brain development.
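The volume-measurement step described above (carrying the model's intracranial mask into subject space via the inverse registration transform) can be illustrated with a small sketch: under an affine transform, a volume scales by the absolute determinant of the linear part. The mask size, voxel spacing, and scale factor below are hypothetical numbers, not values from the paper.

```python
# Sketch of atlas-based intracranial volume (ICV) measurement. A brain
# model is registered to the subject; the model's intracranial mask is
# pushed through the model-to-subject (inverse) transform, and volumes
# scale by |det| of the transform's linear part.

def det3(A):
    # Determinant of a 3x3 matrix given as nested lists.
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def subject_icv_cm3(model_mask_voxels, model_voxel_mm, model_to_subject_linear):
    # Model-space mask volume in cm^3: voxel count x voxel volume (mm^3 -> cm^3).
    model_vol = model_mask_voxels * (model_voxel_mm ** 3) / 1000.0
    # Subject-space volume: scaled by |det| of the linear part of the transform.
    return model_vol * abs(det3(model_to_subject_linear))

# Hypothetical example: a 600,000-voxel mask at 0.5 mm isotropic spacing is
# 75 cm^3 in model space; a uniform 5% enlargement maps it to ~86.8 cm^3.
scale = [[1.05, 0, 0], [0, 1.05, 0], [0, 0, 1.05]]
icv = subject_icv_cm3(600_000, 0.5, scale)
```

In practice the transform comes from a non-rigid registration and the mask is resampled on the subject grid, but the affine case shows why the measured volume tracks the registration's scale component directly.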
Affiliation(s)
- Yaron Caspi
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Sonja M. C. de Zwarte
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Iris J. Iemenschot
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Raquel Lumbreras
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Roel de Heus
- Department of Obstetrics and Gynaecology, St. Antonius Hospital, Utrecht, Netherlands; Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Mireille N. Bekker
- Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Hilleke Hulshoff Pol
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands; Department of Psychology, Utrecht University, Utrecht, Netherlands

18
Zimmer VA, Gomez A, Skelton E, Wright R, Wheeler G, Deng S, Ghavami N, Lloyd K, Matthew J, Kainz B, Rueckert D, Hajnal JV, Schnabel JA. Placenta segmentation in ultrasound imaging: Addressing sources of uncertainty and limited field-of-view. Med Image Anal 2022; 83:102639. [PMID: 36257132 PMCID: PMC7614009 DOI: 10.1016/j.media.2022.102639]
Abstract
Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placenta appearance, (ii) the restricted image quality in US, which results in highly variable reference annotations, and (iii) the limited field-of-view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular in limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This results in high-quality segmentation of larger structures such as the placenta, which are beyond the field-of-view of single probes, with reduced image artifacts.
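This entry (and the D-net entry above) reports segmentation accuracy as Dice overlap. As a reminder of the metric, a minimal stand-alone implementation over binary masks represented as sets of voxel indices; the example masks below are made up for illustration:

```python
def dice(pred, ref):
    # Dice overlap between two binary masks, each given as a set of
    # voxel-index tuples: 2|A ∩ B| / (|A| + |B|).
    if not pred and not ref:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * len(pred & ref) / (len(pred) + len(ref))

# Toy 2D example: 3 of 4 voxels overlap in each mask.
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
ref = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(pred, ref)  # 2*3 / (4+4) = 0.75
```

A Dice of 0.86 therefore means the automatic and reference masks share roughly 86% of their combined extent, which is why the paper compares it against inter-observer Dice rather than against 1.0.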
Affiliation(s)
- Veronika A. Zimmer
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Faculty of Informatics, Technical University of Munich, Germany
- Alberto Gomez
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Emily Skelton
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; School of Health Sciences, City, University of London, London, United Kingdom
- Robert Wright
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Gavin Wheeler
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Shujie Deng
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Nooshin Ghavami
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Karen Lloyd
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Jacqueline Matthew
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Bernhard Kainz
- BioMedIA group, Imperial College London, London, United Kingdom; FAU Erlangen-Nürnberg, Germany
- Daniel Rueckert
- Faculty of Informatics, Technical University of Munich, Germany; BioMedIA group, Imperial College London, London, United Kingdom
- Joseph V. Hajnal
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Julia A. Schnabel
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Faculty of Informatics, Technical University of Munich, Germany; Helmholtz Center Munich, Germany

19
A hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme for breast cancer segmentation based on DCE-MRI. Med Image Anal 2022; 82:102572. [PMID: 36055051 DOI: 10.1016/j.media.2022.102572]
Abstract
Automatically and accurately annotating tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method to evaluate tumor vasculature architectures based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. However, it remains challenging due to the varying sizes, shapes, appearances and densities of tumors caused by the high heterogeneity of breast cancer, and the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates pharmacokinetics priors and feature refinement to generate sufficiently adequate features in DCE-MRI for breast cancer segmentation. The pharmacokinetics prior, expressed by the time intensity curve (TIC), is incorporated into the scheme through an objective function called the dynamic contrast-enhanced prior (DCP) loss. It contains contrast agent kinetic heterogeneity prior knowledge, which is important to optimize our model parameters. Besides, we design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slice spatial structural correlations, and deploy a spatial-kinetic fusion module (SKFM) to effectively leverage the complementary information extracted from spatial-kinetic space. Furthermore, considering that low spatial resolution often leads to poor image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. We conduct extensive experiments to validate the proposed method and show that our approach can outperform recent state-of-the-art segmentation methods on a breast cancer DCE-MRI dataset. Moreover, to explore generalization to other segmentation tasks on dynamic imaging, we also extend the proposed method to brain segmentation in DSC-MRI sequences.
Our source code will be released on https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.
20
Belciug S. Learning deep neural networks' architectures using differential evolution. Case study: Medical imaging processing. Comput Biol Med 2022; 146:105623. [PMID: 35751202 PMCID: PMC9112664 DOI: 10.1016/j.compbiomed.2022.105623]
Abstract
The COVID-19 pandemic has changed the way we practice medicine. Cancer patient and obstetric care landscapes have been distorted. Delaying cancer diagnosis or maternal-fetal monitoring increased the number of preventable deaths or pregnancy complications. One solution is using Artificial Intelligence to help medical personnel establish a diagnosis in a faster and more accurate manner. Deep learning is the state-of-the-art solution for image classification. Researchers manually design fixed deep neural network architectures and afterwards verify their performance. The goal of this paper is to propose a method for learning deep network architectures automatically. As the number of network architectures increases exponentially with the number of convolutional layers in the network, we propose a differential evolution algorithm to traverse the search space. First, we propose a way to encode the network structure as a candidate solution in a fixed-length integer array, followed by the initialization of the differential evolution method. A set of random individuals is generated, followed by mutation, recombination, and selection. At each generation the individuals with the poorest loss values are eliminated and replaced with more competitive individuals. The model has been tested on three cancer datasets containing MRI scans and histopathological images and on two maternal-fetal screening ultrasound image datasets. The novel proposed method has been compared and statistically benchmarked against four state-of-the-art deep learning networks: VGG16, ResNet50, Inception V3, and DenseNet169. The experimental results showed that the model is competitive with other state-of-the-art models, obtaining accuracies between 78.73% and 99.50% depending on the dataset it was applied on.
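The evolutionary loop the abstract describes (fixed-length integer encoding, then mutation, recombination, and selection per generation) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the encoding fields, bounds, DE constants (F, CR), and the `toy_fitness` stand-in for a validation-loss evaluation are all assumptions.

```python
import random

# Each candidate encodes a network as a fixed-length integer array, e.g.
# [n_conv_layers, log2_filters, kernel_choice, log2_dense_units].
BOUNDS = [(1, 8), (4, 9), (0, 2), (6, 10)]

def toy_fitness(cand):
    # Stand-in for validation loss: lower is better. A real search would
    # train the decoded network and return its loss.
    depth, filt, kern, dense = cand
    return abs(depth - 4) + abs(filt - 7) + kern + abs(dense - 8)

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def differential_evolution(pop_size=20, gens=50, F=0.8, CR=0.9, seed=0):
    rng = random.Random(seed)
    # Initialization: a set of random individuals within the bounds.
    pop = [[rng.randint(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        for i, x in enumerate(pop):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutation + recombination: perturb a's genes by the scaled
            # difference of b and c, mixing with x at crossover rate CR.
            trial = [
                clip(round(a[k] + F * (b[k] - c[k])), *BOUNDS[k])
                if rng.random() < CR else x[k]
                for k in range(len(BOUNDS))
            ]
            # Selection: the poorer of x and trial is eliminated.
            if toy_fitness(trial) <= toy_fitness(x):
                pop[i] = trial
    return min(pop, key=toy_fitness)

best = differential_evolution()
```

Rounding the mutant back to integers keeps the candidate decodable as an architecture; with a real fitness function the loop is identical, only far more expensive per evaluation.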
|
21
|
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024 PMCID: PMC9287600 DOI: 10.1016/j.isci.2022.104713] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/09/2022] [Accepted: 06/28/2022] [Indexed: 11/26/2022] Open
Abstract
Several reviews have been conducted regarding artificial intelligence (AI) techniques to improve pregnancy outcomes, but they do not focus on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Out of 1269 studies, 107 were included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetal head (43), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications.
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized based on four categories: general, head, heart, face, abdomen
- The most used AI techniques are classification, segmentation, object detection, and RL
- The research and practical implications are included
|
22
|
Moser F, Huang R, Papież BW, Namburete AIL. BEAN: Brain Extraction and Alignment Network for 3D Fetal Neurosonography. Neuroimage 2022; 258:119341. [PMID: 35654376 DOI: 10.1016/j.neuroimage.2022.119341] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 04/08/2022] [Accepted: 05/28/2022] [Indexed: 01/18/2023] Open
Abstract
Brain extraction (masking of extra-cerebral tissues) and alignment are fundamental first steps of most neuroimage analysis pipelines. The lack of automated solutions for 3D ultrasound (US) has therefore limited its potential as a neuroimaging modality for studying fetal brain development using routinely acquired scans. In this work, we propose a convolutional neural network (CNN) that accurately and consistently aligns and extracts the fetal brain from minimally pre-processed 3D US scans. Our multi-task CNN, Brain Extraction and Alignment Network (BEAN), consists of two independent branches: 1) a fully-convolutional encoder-decoder branch for brain extraction of unaligned scans, and 2) a two-step regression-based branch for similarity alignment of the brain to a common coordinate space. BEAN was tested on 356 fetal head 3D scans spanning the gestational range of 14 to 30 weeks, significantly outperforming all current alternatives for fetal brain extraction and alignment. BEAN achieved state-of-the-art performance for both tasks, with a mean Dice Similarity Coefficient (DSC) of 0.94 for the brain extraction masks, and a mean DSC of 0.93 for the alignment of the target brain masks. The presented experimental results show that brain structures such as the thalamus, choroid plexus, cavum septum pellucidum, and Sylvian fissure, are consistently aligned throughout the dataset and remain clearly visible when the scans are averaged together. The BEAN implementation and related code can be found under www.github.com/felipemoser/kelluwen.
Affiliation(s)
- Felipe Moser
- Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, UK.
- Ruobing Huang
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; Nuffield Department of Women's and Reproductive Health, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Bartłomiej W Papież
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Ana I L Namburete
- Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, UK; Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom
|
23
|
Deep learning-based plane pose regression in obstetric ultrasound. Int J Comput Assist Radiol Surg 2022; 17:833-839. [PMID: 35489005 PMCID: PMC9110476 DOI: 10.1007/s11548-022-02609-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 03/10/2022] [Indexed: 01/16/2023]
Abstract
Purpose In obstetric ultrasound (US) scanning, the learner’s ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a major challenge in skill acquisition. We aim to build a US plane localisation system for 3D visualisation, training, and guidance without integrating additional sensors. Methods We propose a regression convolutional neural network (CNN) using image features to estimate the six-dimensional pose of arbitrarily oriented US planes relative to the fetal brain centre. The network was trained on synthetic images acquired from phantom 3D US volumes and fine-tuned on real scans. Training data was generated by slicing US volumes into imaging planes in Unity at random coordinates and more densely around the standard transventricular (TV) plane. Results With phantom data, the median errors are 0.90 mm/1.17° and 0.44 mm/1.21° for random planes and planes close to the TV one, respectively. With real data, using a different fetus with the same gestational age (GA), these errors are 11.84 mm/25.17°. The average inference time is 2.97 ms per plane. Conclusion The proposed network reliably localises US planes within the fetal brain in phantom data and successfully generalises pose regression for an unseen fetal brain from a similar GA as in training. Future development will expand the prediction to volumes of the whole fetus and assess its potential for vision-based, freehand US-assisted navigation when acquiring standard fetal planes. Supplementary Information The online version contains supplementary material available at 10.1007/s11548-022-02609-z.
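The paired errors above (millimetres for translation, degrees for rotation) suggest a standard way of scoring a predicted 6D plane pose. A minimal sketch, assuming a pose is given as a 3x3 rotation matrix plus a translation vector, with the rotation error taken as the geodesic angle on SO(3); the paper's exact metric may differ.

```python
import math

def mat_mul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rotation_angle_deg(R_pred, R_true):
    # Geodesic distance on SO(3): the angle of the relative rotation
    # R_rel = R_pred^T @ R_true, recovered from its trace.
    R_rel = mat_mul(transpose(R_pred), R_true)
    trace = R_rel[0][0] + R_rel[1][1] + R_rel[2][2]
    # clamp for numerical safety before acos
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
    return math.degrees(math.acos(c))

def translation_error_mm(t_pred, t_true):
    # Euclidean distance between predicted and true plane centres
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(t_pred, t_true)))

# Example: a 30-degree rotation about the z-axis vs. the identity
th = math.radians(30)
Rz = [[math.cos(th), -math.sin(th), 0],
      [math.sin(th),  math.cos(th), 0],
      [0, 0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

With these definitions, `rotation_angle_deg(I3, Rz)` recovers the 30-degree rotation, and the translation error is the familiar Euclidean distance in millimetres.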
|
24
|
Hesse LS, Aliasi M, Moser F, the INTERGROWTH-21st Consortium, Haak MC, Xie W, Jenkinson M, Namburete AIL. Subcortical Segmentation of the Fetal Brain in 3D Ultrasound using Deep Learning. Neuroimage 2022; 254:119117. [PMID: 35331871 DOI: 10.1016/j.neuroimage.2022.119117] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 02/25/2022] [Accepted: 03/17/2022] [Indexed: 12/24/2022] Open
Abstract
The quantification of subcortical volume development from 3D fetal ultrasound can provide important diagnostic information during pregnancy monitoring. However, manual segmentation of subcortical structures in ultrasound volumes is time-consuming and challenging due to low soft tissue contrast, speckle and shadowing artifacts. For this reason, we developed a convolutional neural network (CNN) for the automated segmentation of the choroid plexus (CP), lateral posterior ventricle horns (LPVH), cavum septum pellucidum et vergae (CSPV), and cerebellum (CB) from 3D ultrasound. As ground-truth labels are scarce and expensive to obtain, we applied few-shot learning, in which only a small number of manual annotations (n = 9) are used to train a CNN. We compared training a CNN with only a few individually annotated volumes versus many weakly labelled volumes obtained from atlas-based segmentations. This showed that segmentation performance close to intra-observer variability can be obtained with only a handful of manual annotations. Finally, the trained models were applied to a large number (n = 278) of ultrasound image volumes of a diverse, healthy population, obtaining novel US-specific growth curves of the respective structures during the second trimester of gestation.
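Segmentation quality in this abstract is reported as a Dice Similarity Coefficient. As a reminder of what that score measures, a minimal sketch on binary masks; the toy masks are invented for illustration.

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks,
    given as flat iterables of 0/1 values."""
    pred, truth = list(pred), list(truth)
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: define DSC as perfect agreement
    return 2.0 * intersection / total

# Toy 1-D "masks": overlap of 3 voxels, sizes 4 and 4 -> DSC = 0.75
a = [1, 1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 1, 0]
```

In practice the masks would be flattened 3D volumes; the formula is unchanged.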
Affiliation(s)
- Linde S Hesse
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom.
- Moska Aliasi
- Department of Obstetrics and Fetal Medicine, Leiden University Medical Center, The Netherlands
- Felipe Moser
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom
- the INTERGROWTH-21st Consortium
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom; Department of Obstetrics and Fetal Medicine, Leiden University Medical Center, The Netherlands; Visual Geometry Group, Department of Engineering Science, University of Oxford, United Kingdom; Wellcome Centre for Integrative Neuroimaging, FMRIB, University of Oxford, United Kingdom; Australian Institute for Machine Learning (AIML), Australia; South Australian Health and Medical Research Institute (SAHMRI), Australia
- Monique C Haak
- Department of Obstetrics and Fetal Medicine, Leiden University Medical Center, The Netherlands
- Weidi Xie
- Visual Geometry Group, Department of Engineering Science, University of Oxford, United Kingdom
- Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, FMRIB, University of Oxford, United Kingdom; Australian Institute for Machine Learning (AIML), Australia; South Australian Health and Medical Research Institute (SAHMRI), Australia
- Ana I L Namburete
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom
|
25
|
Torres HR, Morais P, Oliveira B, Birdir C, Rüdiger M, Fonseca JC, Vilaça JL. A review of image processing methods for fetal head and brain analysis in ultrasound images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106629. [PMID: 35065326 DOI: 10.1016/j.cmpb.2022.106629] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 12/20/2021] [Accepted: 01/08/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Examination of the head shape and brain during the fetal period is paramount to evaluate head growth, predict neurodevelopment, and diagnose fetal abnormalities. Prenatal ultrasound is the most used imaging modality to perform this evaluation. However, manual interpretation of these images is challenging, and thus image processing methods to aid this task have been proposed in the literature. This article presents a review of these state-of-the-art methods. METHODS This work analyzes and categorizes the different image processing methods used to evaluate the fetal head and brain in ultrasound imaging. To that end, a total of 109 articles published since 2010 were analyzed. Different applications are covered in this review, namely analysis of the head shape and inner structures of the brain, identification of standard clinical planes, fetal development analysis, and methods for image processing enhancement. RESULTS For each application, the reviewed techniques are categorized according to their theoretical approach, and the image processing methods best suited to accurately analyze the head and brain are identified. Furthermore, future research needs are discussed. Finally, topics whose research is lacking in the literature are outlined, along with new fields of application. CONCLUSIONS A multitude of image processing methods has been proposed for fetal head and brain analysis. In summary, techniques from different categories have shown their potential to improve clinical practice. Nevertheless, further research must be conducted to strengthen the current methods, especially for 3D imaging analysis and acquisition and for abnormality detection.
Affiliation(s)
- Helena R Torres
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal.
- Pedro Morais
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Bruno Oliveira
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Cahit Birdir
- Department of Gynecology and Obstetrics, University Hospital Carl Gustav Carus, TU Dresden, Germany; Saxony Center for Feto-Neonatal Health, TU Dresden, Germany
- Mario Rüdiger
- Department for Neonatology and Pediatric Intensive Care, University Hospital Carl Gustav Carus, TU Dresden, Germany
- Jaime C Fonseca
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
|
26
|
Zhang Y, Li H, Du J, Qin J, Wang T, Chen Y, Liu B, Gao W, Ma G, Lei B. 3D Multi-Attention Guided Multi-Task Learning Network for Automatic Gastric Tumor Segmentation and Lymph Node Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1618-1631. [PMID: 33646948 DOI: 10.1109/tmi.2021.3062902] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Automatic gastric tumor segmentation and lymph node (LN) classification not only can assist radiologists in reading images, but also provide image-guided clinical diagnosis and improve diagnosis accuracy. However, due to the inhomogeneous intensity distribution of gastric tumor and LN in CT scans, the ambiguous/missing boundaries, and highly variable shapes of gastric tumor, it is quite challenging to develop an automatic solution. To comprehensively address these challenges, we propose a novel 3D multi-attention guided multi-task learning network for simultaneous gastric tumor segmentation and LN classification, which makes full use of the complementary information extracted from different dimensions, scales, and tasks. Specifically, we tackle task correlation and heterogeneity with the convolutional neural network consisting of scale-aware attention-guided shared feature learning for refined and universal multi-scale features, and task-aware attention-guided feature learning for task-specific discriminative features. This shared feature learning is equipped with two types of scale-aware attention (visual attention and adaptive spatial attention) and two stage-wise deep supervision paths. The task-aware attention-guided feature learning comprises a segmentation-aware attention module and a classification-aware attention module. The proposed 3D multi-task learning network can balance all tasks by combining segmentation and classification loss functions with weight uncertainty. We evaluate our model on an in-house CT images dataset collected from three medical centers. Experimental results demonstrate that our method outperforms the state-of-the-art algorithms, and obtains promising performance for tumor segmentation and LN classification. Moreover, to explore the generalization for other segmentation tasks, we also extend the proposed network to liver tumor segmentation in CT images of the MICCAI 2017 Liver Tumor Segmentation Challenge. 
Our implementation is released at https://github.com/infinite-tao/MA-MTLN.
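The task balancing "with weight uncertainty" mentioned in the abstract is commonly implemented in the style of Kendall et al.'s homoscedastic uncertainty weighting, where each task loss is scaled by a learned precision and the corresponding log-variance is penalised. A minimal sketch under that assumption; the paper's exact formulation may differ.

```python
import math

def combined_loss(task_losses, log_vars):
    """Uncertainty-weighted multi-task loss in the style of
    Kendall et al. (2018): each task loss L_i is scaled by
    exp(-s_i) and the log-variance s_i is added as a penalty,
    so the balance between tasks is learned rather than
    hand-tuned. (Illustrative; not the paper's exact code.)"""
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# Two tasks (segmentation and classification) with equal trust:
seg_loss, cls_loss = 0.8, 0.4
total = combined_loss([seg_loss, cls_loss], [0.0, 0.0])
```

In training, the `log_vars` would be learnable parameters optimised jointly with the network weights: raising `s_i` down-weights a noisy task's loss but pays the `+ s_i` penalty.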
|
27
|
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110 DOI: 10.1016/j.ejrad.2021.109717] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/28/2021] [Accepted: 04/11/2021] [Indexed: 12/13/2022]
Abstract
Ultrasound (US), a flexible green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification and monitoring of diseases. The diagnostic performance is inevitably reduced by the high operator-dependence intrinsic to US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessment of imaging data, showing high potential to assist physicians in obtaining more accurate and reproducible results. In this article, we first provide a general understanding of AI, machine learning (ML) and deep learning (DL) technologies. We then review the rapidly growing applications of AI, especially DL technology, in the field of US across the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, musculoskeletal system and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
|
28
|
Song C, Gao T, Wang H, Sudirman S, Zhang W, Zhu H. The Classification and Segmentation of Fetal Anatomies Ultrasound Image: A Survey. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3616] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Ultrasound image processing technology has been used in obstetric observation of the fetus and diagnosis of fetal diseases for more than half a century, and the field, which offers certain advantages and unique challenges, has developed rapidly. From the perspective of ultrasound image analysis, the first essentials are determining fetal survival, gestational age, and so on. Approaches for analysing ultrasound images of fetal anatomies have since been studied, and the technique has become an indispensable diagnostic tool for diagnosing fetal abnormalities and gaining more insight into the ongoing development of the fetus. It is now time to review previous approaches in this field systematically and to predict future directions. Thus, this article reviews state-of-the-art approaches, with their basic ideas, theories, pros and cons, for ultrasound imaging of the whole fetus and its anatomies. It first summarizes the current open problems and introduces the popular image processing methods, such as classification and segmentation. After that, the advantages and disadvantages of existing approaches, as well as new research ideas, are briefly discussed. Finally, the challenges and future trends are discussed.
Affiliation(s)
- Chunlin Song
- State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Tao Gao
- Obstetrics and Gynecology, Wuxi People’s Hospital, Wuxi, Jiangsu, 214023, China
- Hong Wang
- BOE Technology Group Co. Ltd., Beijing, 100176, China
- Sud Sudirman
- Department of Computer Science, Liverpool John Moores University, Liverpool, L3 3AF, UK
- Wei Zhang
- BOE Technology Group Co. Ltd., Beijing, 100176, China
- Haogang Zhu
- State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
|
29
|
Yeung PH, Aliasi M, Papageorghiou AT, Haak M, Xie W, Namburete AIL. Learning to map 2D ultrasound images into 3D space with minimal human annotation. Med Image Anal 2021; 70:101998. [PMID: 33711741 DOI: 10.1016/j.media.2021.101998] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Revised: 01/26/2021] [Accepted: 02/01/2021] [Indexed: 10/22/2022]
Abstract
In fetal neurosonography, aligning two-dimensional (2D) ultrasound scans to their corresponding planes in three-dimensional (3D) space remains a challenging task. In this paper, we propose a convolutional neural network that predicts the position of 2D ultrasound fetal brain scans in 3D atlas space. Instead of purely supervised learning, which requires heavy annotation of each 2D scan, we train the model by sampling 2D slices from 3D fetal brain volumes and targeting the model to predict the inverse of the sampling process, resembling the idea of self-supervised learning. We propose a model that takes a set of images as input and learns to compare them in pairs. The pairwise comparison is weighted by an attention module based on its contribution to the prediction, which is learnt implicitly during training. The feature representation for each image thus incorporates its relative position with respect to all the other images in the set, and is later used for the final prediction. We benchmark our model on 2D slices sampled from 3D fetal brain volumes at 18-22 weeks' gestational age. Using three evaluation metrics, namely Euclidean distance, plane angles and normalized cross-correlation, which account for both the geometric and appearance discrepancies between ground truth and prediction, our model outperforms a baseline model by as much as 23% as the number of input images increases. We further demonstrate that our model generalizes to (i) real 2D standard transthalamic plane images, achieving performance comparable to human annotations, as well as (ii) video sequences of 2D freehand fetal brain scans.
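Of the three evaluation metrics named above, normalized cross-correlation is the one that scores appearance agreement; it can be sketched as follows on images flattened to intensity lists (an illustrative textbook formulation, not the paper's code).

```python
import math

def ncc(x, y):
    """Normalised cross-correlation between two equal-length images
    given as flat lists of intensities. Returns a value in [-1, 1]:
    1 for identical appearance up to brightness/contrast, -1 for
    inverted contrast, near 0 for unrelated images."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [v - mx for v in x]          # zero-mean versions
    dy = [v - my for v in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))
    return num / den if den else 0.0

img = [0.1, 0.5, 0.9, 0.3]  # toy flattened image
```

Because NCC subtracts the mean and divides by the standard deviations, it is invariant to affine intensity changes, which is why it is a sensible appearance metric across ultrasound scans with different gain settings.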
Affiliation(s)
- Pak-Hei Yeung
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom.
- Moska Aliasi
- Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Aris T Papageorghiou
- Nuffield Department of Obstetrics and Gynaecology, University of Oxford, Oxford, United Kingdom
- Monique Haak
- Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Weidi Xie
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom; Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Ana I L Namburete
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
|
30
|
Zhang B, Liu H, Luo H, Li K. Automatic quality assessment for 2D fetal sonographic standard plane based on multitask learning. Medicine (Baltimore) 2021; 100:e24427. [PMID: 33530242 PMCID: PMC7850658 DOI: 10.1097/md.0000000000024427] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 11/18/2020] [Accepted: 12/31/2020] [Indexed: 01/05/2023] Open
Abstract
ABSTRACT The quality control of fetal sonographic (FS) images is essential for correct biometric measurements and fetal anomaly diagnosis. However, quality control requires professional sonographers and is often labor-intensive. To solve this problem, we propose an automatic image quality assessment scheme based on multitask learning to assist in FS image quality control. An essential criterion for FS image quality control is that all the essential anatomical structures in the section should appear full and remarkable with a clear boundary. Therefore, our scheme aims to identify those essential anatomical structures to judge whether an FS image is a standard image, which is achieved by 3 convolutional neural networks. The Feature Extraction Network extracts deep-level features of FS images. Based on the extracted features, the Class Prediction Network determines whether the structure meets the standard, and the Region Proposal Network identifies its position. The scheme has been applied to 3 types of fetal sections: head, abdomen, and heart. The experimental results show that our method can assess the quality of an FS image in less than a second. Moreover, our method achieves performance competitive with state-of-the-art methods in both segmentation and diagnosis.
Affiliation(s)
- Bo Zhang
- Department of Ultrasound, West China Second Hospital, Sichuan University/ Key Laboratory of Obstetrics & Gynecology, Pediatric Diseases, and Birth Defects of the Ministry of Education
- Han Liu
- Glasgow College, University of Electronic Science and Technology of China
- Hong Luo
- Department of Ultrasound, West China Second Hospital, Sichuan University/ Key Laboratory of Obstetrics & Gynecology, Pediatric Diseases, and Birth Defects of the Ministry of Education
- Kejun Li
- Wangwang Technology Company, Chengdu, China
|
31
|
Automatic Fetal Middle Sagittal Plane Detection in Ultrasound Using Generative Adversarial Network. Diagnostics (Basel) 2020; 11:diagnostics11010021. [PMID: 33374307 PMCID: PMC7824131 DOI: 10.3390/diagnostics11010021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 12/18/2020] [Accepted: 12/21/2020] [Indexed: 11/22/2022] Open
Abstract
Background and Objective: In the first trimester of pregnancy, fetal growth and abnormalities can be assessed using the exact middle sagittal plane (MSP) of the fetus. However, ultrasound (US) image quality and operator experience affect the accuracy. We present an automatic system that enables precise fetal MSP detection from three-dimensional (3D) US and provides an evaluation of its performance using a generative adversarial network (GAN) framework. Method: The neural network is designed as a filter and generates masks to obtain the MSP, learning the features and the MSP location in 3D space. Using the proposed image analysis system, a seed point was obtained from 218 first-trimester fetal 3D US volumes using deep learning, and the MSP was automatically extracted. Results: The experimental results reveal the feasibility and excellent performance of the proposed approach, comparing the automatically and manually detected MSPs. There was no significant difference between the semi-automatic and automatic systems. Further, the inference time of the automatic system was up to two times faster than the semi-automatic approach. Conclusion: The proposed system offers precise fetal MSP measurements. Therefore, this automatic fetal MSP detection and measurement approach is anticipated to be clinically useful. The proposed system can also be applied to other relevant clinical fields in the future.
|
32
|
Jiao J, Namburete AIL, Papageorghiou AT, Noble JA. Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4413-4424. [PMID: 32833630 DOI: 10.1109/tmi.2020.3018560] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images, which closely resemble anatomical images, are much easier for non-experts to interpret. Thus, in this article we propose to generate MR-like images directly from clinical US images. Such a capability is also potentially useful in medical image analysis, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised, without any external annotations. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data is unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the distributions to be statistically indistinguishable, by adversarial learning in both the image domain and the feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to utilise non-local spatial information, by encouraging multi-modal knowledge fusion and propagation. We extend the approach to the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively against real fetal MR images and other synthesis approaches, demonstrating the feasibility of synthesising realistic MR images.
|
33
|
Yang X, Wang X, Wang Y, Dou H, Li S, Wen H, Lin Y, Heng PA, Ni D. Hybrid attention for automatic segmentation of whole fetal head in prenatal ultrasound volumes. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 194:105519. [PMID: 32447146 DOI: 10.1016/j.cmpb.2020.105519] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 04/05/2020] [Accepted: 04/23/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Biometric measurements of the fetal head are important indicators for maternal and fetal health monitoring during pregnancy. 3D ultrasound (US) has unique advantages over 2D scans in covering the whole fetal head and may improve diagnoses. However, automatically segmenting the whole fetal head in US volumes remains an emerging and unsolved problem. The challenges that automated solutions need to tackle include poor image quality, boundary ambiguity, long-span occlusion, and appearance variability across different fetal poses and gestational ages. In this paper, we propose the first fully automated solution for segmenting the whole fetal head in US volumes. METHODS The segmentation task is first formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture. We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress non-informative volumetric features in a composite and hierarchical way. With little computational overhead, HAS proves effective in addressing boundary ambiguity and deficiency. To enhance spatial consistency in the segmentation, we further organize multiple segmentors in a cascaded fashion to refine the results by revisiting the context in their predecessors' predictions. RESULTS Validated on a large dataset collected from 100 healthy volunteers, our method presents superior segmentation performance (Dice similarity coefficient (DSC) 96.05%) and remarkable agreement with experts (-1.6 ± 19.5 mL). With another 156 volumes collected from 52 volunteers, we achieve high reproducibility (mean standard deviation 11.524 mL) against scan variations. CONCLUSION This is the first investigation of whole fetal head segmentation in 3D US. Our method is promising as a feasible solution for assisting volumetric US-based prenatal studies.
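The core idea of attention-based feature suppression can be sketched as a soft gate multiplied onto a feature volume. This toy sketch shows only the generic gating mechanism; the paper's HAS composes channel- and spatial-wise gates hierarchically, which is not reproduced here:

```python
import numpy as np

def attention_gate(features: np.ndarray, logits: np.ndarray) -> np.ndarray:
    """Rescale each voxel's features by a sigmoid gate in (0, 1):
    informative regions (high logits) pass through, non-informative
    regions (low logits) are suppressed."""
    gate = 1.0 / (1.0 + np.exp(-logits))
    return features * gate

# Toy 3-D feature volume: one "informative" slab is kept, the rest suppressed.
feat = np.ones((4, 4, 4))
logits = np.full((4, 4, 4), -10.0)
logits[0] = 10.0
out = attention_gate(feat, logits)
```

The gate leaves the feature volume's shape unchanged, so it can be dropped between any two layers of an encoder-decoder with little computational overhead.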
Affiliation(s)
- Xin Yang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
- Xu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
- Yi Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
- Haoran Dou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
- Shengli Li
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen, China
- Huaxuan Wen
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen, China
- Yi Lin
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China.
|
34
|
Hervella ÁS, Rouco J, Novo J, Ortega M. Learning the retinal anatomy from scarce annotated data using self-supervised multimodal reconstruction. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106210] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
35
|
Bastiaansen WAP, Rousian M, Steegers-Theunissen RPM, Niessen WJ, Koning A, Klein S. Towards Segmentation and Spatial Alignment of the Human Embryonic Brain Using Deep Learning for Atlas-Based Registration. BIOMEDICAL IMAGE REGISTRATION 2020. [PMCID: PMC7279927 DOI: 10.1007/978-3-030-50120-4_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
36
|
Talaei-Khoei A, Tavana M, Wilson JM. A predictive analytics framework for identifying patients at risk of developing multiple medical complications caused by chronic diseases. Artif Intell Med 2019; 101:101750. [PMID: 31813486 DOI: 10.1016/j.artmed.2019.101750] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2018] [Revised: 07/07/2019] [Accepted: 10/30/2019] [Indexed: 01/22/2023]
Abstract
Chronic diseases often cause several medical complications. This paper aims to predict multiple complications among patients with a chronic disease. The literature uses single-task learning algorithms to predict complications independently and assumes no correlation among the complications of chronic diseases. We propose two methods (independent prediction of complications with single-task learning and concurrent prediction of complications with multi-task learning) and show that the medical complications of chronic diseases can be correlated. In a case study, we compare the performance of the two methods by predicting complications of hypertrophic cardiomyopathy using 106 predictors in 1078 electronic medical records from April 2009 to April 2017, inclusive. The methods are implemented using logistic regression, artificial neural networks, decision trees, and support vector machines. The results show that multi-task learning with logistic regression improves the performance of the predictions in terms of both discrimination and calibration.
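One common way to realise multi-task logistic regression is to model each task's weight vector as a shared component plus a penalised task-specific offset, so correlated complications borrow statistical strength from each other. The following is a minimal sketch on synthetic data (the data, dimensions, and hyperparameters are illustrative, not the study's EMR setting or exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: two correlated "complications" driven by the same
# latent patient signal, observed with independent noise.
n, d = 400, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y1 = (X @ w_true + 0.3 * rng.normal(size=n) > 0.0).astype(float)
y2 = (X @ w_true + 0.3 * rng.normal(size=n) > 0.2).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Multi-task weights: shared vector ws plus small per-task offsets,
# with an L2 penalty pulling the offsets towards the shared solution.
ws = np.zeros(d)
deltas = [np.zeros(d), np.zeros(d)]
lam, lr = 0.1, 0.5
for _ in range(300):
    grad_shared = np.zeros(d)
    for t, y in enumerate((y1, y2)):
        p = sigmoid(X @ (ws + deltas[t]))
        g = X.T @ (p - y) / n          # BCE gradient for logistic regression
        grad_shared += g
        deltas[t] -= lr * (g + lam * deltas[t])
    ws -= lr * grad_shared

acc1 = ((sigmoid(X @ (ws + deltas[0])) > 0.5) == y1).mean()
acc2 = ((sigmoid(X @ (ws + deltas[1])) > 0.5) == y2).mean()
```

Setting `lam` large forces the tasks to share one weight vector; setting it to zero recovers independent single-task logistic regressions, which is the baseline the paper compares against.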
Affiliation(s)
- Amir Talaei-Khoei
- Department of Information Systems, University of Nevada, Reno, USA; School of Software, University of Technology Sydney, Australia.
- Madjid Tavana
- Business Systems and Analytics Department, Distinguished Chair of Business Analytics, La Salle University, Philadelphia, USA; Business Information Systems Department, Faculty of Business Administration and Economics, University of Paderborn, Paderborn, Germany.
- James M Wilson
- School of Community Health Sciences, University of Nevada, Reno, USA.
|
37
|
Lin Z, Li S, Ni D, Liao Y, Wen H, Du J, Chen S, Wang T, Lei B. Multi-task learning for quality assessment of fetal head ultrasound images. Med Image Anal 2019; 58:101548. [PMID: 31525671 DOI: 10.1016/j.media.2019.101548] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 07/15/2019] [Accepted: 08/23/2019] [Indexed: 11/26/2022]
Abstract
Measuring anatomical parameters in prenatal ultrasound images is essential for monitoring the growth and development of the fetus, and relies heavily on obtaining a standard plane. However, the acquisition of a standard plane is, in turn, highly subjective and depends on the clinical experience of sonographers. To deal with this challenge, we propose a new multi-task learning framework using a faster region-based convolutional neural network (MF R-CNN) architecture for standard plane detection and quality assessment. MF R-CNN identifies the critical anatomical structures of the fetal head, analyzes whether the magnification of the ultrasound image is appropriate, and then performs quality assessment of the image based on clinical protocols. Specifically, the first five convolution blocks of MF R-CNN learn features shared within the input data, which can be associated with the detection and classification tasks, and then extend to task-specific output streams. In training, to speed up the convergence of the different tasks, we devise a stage-wise training method based on transfer learning. In addition, our method uses prior clinical and statistical knowledge to reduce the false detection rate. By identifying the key anatomical structures and the magnification of the ultrasound image, we score the plane of the fetal head to judge whether it is a standard image. Experimental results on our own collected dataset show that our method can accurately assess the quality of an ultrasound plane within half a second. Our method achieves promising performance compared with state-of-the-art methods, and can improve examination effectiveness and alleviate measurement error caused by improper ultrasound scanning.
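Once key structures and magnification have been identified, protocol-based scoring reduces to a checklist. A minimal sketch of that final step (the structure names, weights, zoom thresholds, and pass mark below are hypothetical illustrations, not the clinical protocol used in the paper):

```python
# Hypothetical checklist: each key structure earns points when detected.
REQUIRED_STRUCTURES = {"thalamus": 2, "cavum septi pellucidi": 2, "midline echo": 1}

def plane_quality_score(detected: set, zoom_ratio: float) -> int:
    """Score a fetal-head plane: points for each key anatomical structure
    detected, plus a point when the head occupies an appropriate fraction
    of the image (a proxy for correct magnification)."""
    score = sum(w for name, w in REQUIRED_STRUCTURES.items() if name in detected)
    if 0.25 <= zoom_ratio <= 0.75:
        score += 1
    return score

def is_standard_plane(detected: set, zoom_ratio: float, threshold: int = 5) -> bool:
    """Judge whether the plane qualifies as a standard image."""
    return plane_quality_score(detected, zoom_ratio) >= threshold
```

In the paper this checklist is fed by the MF R-CNN's detections rather than hand-labelled sets, but the scoring logic downstream of the network is of this rule-based form.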
Affiliation(s)
- Zehui Lin
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Shengli Li
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Yimei Liao
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Huaxuan Wen
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Jie Du
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Siping Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China.
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China.
|
38
|
Zhu P, Li Z. Guideline-based learning for standard plane extraction in 3-D echocardiography. J Med Imaging (Bellingham) 2019; 5:044503. [PMID: 30840749 PMCID: PMC6245496 DOI: 10.1117/1.jmi.5.4.044503] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2018] [Accepted: 10/30/2018] [Indexed: 01/22/2023] Open
Abstract
The extraction of six standard planes in 3-D cardiac ultrasound plays an important role in clinical examinations to analyze cardiac function. A guideline-based learning method for efficient and accurate standard plane extraction is proposed. A cardiac ultrasound guideline determines the appropriate operation steps for clinical examinations. The idea of guideline-based learning is to incorporate machine learning approaches into each stage of the guideline. First, a Hough forest with hierarchical search is applied for 3-D feature point detection. Second, initial planes are determined using anatomical regularities according to the guideline. Finally, a regression forest integrated with constraints of plane regularities is applied to refine each plane. The proposed method was evaluated on a 3-D cardiac ultrasound dataset and a synthetic dataset. Compared with other plane extraction methods, it demonstrated improved accuracy with a significantly faster running time of 0.8 s/volume. Furthermore, the proposed method was shown to be robust to a range of abnormalities and image qualities that would be seen in clinical practice.
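The second stage (initialising planes from detected feature points) rests on elementary 3-D geometry: three non-collinear landmarks determine a plane. A minimal sketch of that initialisation, with the detection and regression-forest refinement stages deliberately left out:

```python
import numpy as np

def plane_from_landmarks(p1, p2, p3):
    """Initialise a standard plane from three detected 3-D feature points:
    return the plane through them as (anchor point, unit normal), where the
    normal is the cross product of two in-plane edge vectors."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return p1, n / np.linalg.norm(n)

# Example: three landmarks in the z = 0 plane give a normal along the z-axis.
point, normal = plane_from_landmarks([0, 0, 0], [1, 0, 0], [0, 1, 0])
```

In the full pipeline, the Hough forest supplies the landmark coordinates and the regression forest then nudges each initial plane within the anatomical constraints; this sketch covers only the geometric step in between.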
Affiliation(s)
- Peifei Zhu
- Hitachi, Ltd., Research and Development Group, Tokyo, Japan
- Zisheng Li
- Hitachi, Ltd., Research and Development Group, Tokyo, Japan
|
39
|
van den Heuvel TLA, Petros H, Santini S, de Korte CL, van Ginneken B. Automated Fetal Head Detection and Circumference Estimation from Free-Hand Ultrasound Sweeps Using Deep Learning in Resource-Limited Countries. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:773-785. [PMID: 30573305 DOI: 10.1016/j.ultrasmedbio.2018.09.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2018] [Revised: 09/05/2018] [Accepted: 09/14/2018] [Indexed: 06/09/2023]
Abstract
Ultrasound imaging remains out of reach for most pregnant women in developing countries because it requires a trained sonographer to acquire and interpret the images. We address this problem by presenting a system that can automatically estimate the fetal head circumference (HC) from data obtained with the obstetric sweep protocol (OSP). The OSP consists of multiple pre-defined sweeps with the ultrasound transducer over the abdomen of the pregnant woman, and can be taught within a day to any health care worker without prior knowledge of ultrasound. An experienced sonographer acquired both the standard plane (to obtain the reference HC) and the OSP from 183 pregnant women in St. Luke's Hospital, Wolisso, Ethiopia. The OSP data, which will most likely not contain the standard plane, were used to automatically estimate the HC using two fully convolutional neural networks. First, a VGG-Net-inspired network was trained to automatically detect the frames that contained the fetal head. Second, a U-net-inspired network was trained to automatically measure the HC in all frames in which the first network detected a fetal head. The HC was estimated from these frame measurements, and the curve of Hadlock was used to determine the gestational age (GA). Most automatically estimated GAs fell within the P2.5-P97.5 interval of the Hadlock curve relative to the GAs obtained from the reference HC, showing that it is possible to automatically estimate GA from OSP data. Our method therefore has potential application in providing maternal care in resource-constrained countries.
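The aggregation step (turning many noisy per-frame HC measurements from a sweep into one sweep-level estimate) can be sketched with simple robust statistics. The function below is an illustrative sketch, not the paper's estimator; the two-network pipeline (VGG-style frame detector, U-net-style measurement network) is assumed to have produced the inputs:

```python
import statistics

def estimate_hc_from_sweep(frame_hcs_mm, head_probs, prob_threshold=0.5):
    """Combine per-frame HC measurements from an OSP sweep: keep only
    frames that the detection network flags as containing a fetal head,
    then take the median as a robust sweep-level HC estimate."""
    kept = [hc for hc, p in zip(frame_hcs_mm, head_probs) if p >= prob_threshold]
    if not kept:
        raise ValueError("no frames contained a fetal head")
    return statistics.median(kept)
```

The median discards outliers such as oblique slices through the skull, which measure a chord rather than the full circumference; the resulting HC is then mapped to gestational age via the Hadlock curve.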
Affiliation(s)
- Thomas L A van den Heuvel
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands.
- Hezkiel Petros
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia
- Stefano Santini
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia
- Chris L de Korte
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia; Physics of Fluids Group, MIRA, University of Twente, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Fraunhofer MEVIS, Bremen, Germany
|
40
|
van den Heuvel TLA, de Bruijn D, de Korte CL, van Ginneken B. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS One 2018; 13:e0200412. [PMID: 30138319 PMCID: PMC6107118 DOI: 10.1371/journal.pone.0200412] [Citation(s) in RCA: 71] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Accepted: 06/26/2018] [Indexed: 11/19/2022] Open
Abstract
In this paper we present a computer-aided detection (CAD) system for automated measurement of the fetal head circumference (HC) in 2D ultrasound images for all trimesters of pregnancy. The HC can be used to estimate the gestational age and monitor the growth of the fetus. Automated HC assessment could be valuable in developing countries, where there is a severe shortage of trained sonographers. The CAD system consists of two steps: first, Haar-like features were computed from the ultrasound images to train a random forest classifier to locate the fetal skull. Second, the HC was extracted using a Hough transform, dynamic programming, and an ellipse fit. The CAD system was trained on 999 images and validated on an independent test set of 335 images from all trimesters. The test set was manually annotated by an experienced sonographer and a medical researcher. The reference gestational age (GA) was estimated using the crown-rump length (CRL) measurement. The mean difference between the reference GA and the GA estimated by the experienced sonographer was 0.8 ± 2.6, -0.0 ± 4.6 and 1.9 ± 11.0 days for the first, second and third trimester, respectively. The mean difference between the reference GA and the GA estimated by the medical researcher was 1.6 ± 2.7, 2.0 ± 4.8 and 3.9 ± 13.7 days. The mean difference between the reference GA and the GA estimated by the CAD system was 0.6 ± 4.3, 0.4 ± 4.7 and 2.5 ± 12.4 days. The results show that the CAD system performs comparably to an experienced sonographer, and similarly or better than systems published in the literature. This is the first automated system for HC assessment evaluated on a large test set containing data from all trimesters of pregnancy.
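After the ellipse fit, the HC is the perimeter of the fitted ellipse, which has no closed form but is very well approximated. One common choice (an assumption here, not necessarily the formula used in the paper) is Ramanujan's first approximation:

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Head circumference from the fitted ellipse's semi-axes a and b,
    via Ramanujan's first approximation. It is exact for a circle
    (a == b) and highly accurate for head-like aspect ratios."""
    return math.pi * (3.0 * (a + b) - math.sqrt((3.0 * a + b) * (a + 3.0 * b)))
```

For a circle the formula reduces to 2*pi*r, and the approximation error stays far below typical inter-observer measurement variability for the moderate eccentricities of fetal skulls.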
Affiliation(s)
- Thomas L. A. van den Heuvel
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Dagmar de Bruijn
- Department of Obstetrics and Gynecology, Radboud University Medical Center, Nijmegen, the Netherlands
- Chris L. de Korte
- Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Fraunhofer MEVIS, Bremen, Germany
|