1. Zhang J, Xiao S, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Advances in the Application of Artificial Intelligence in Fetal Echocardiography. J Am Soc Echocardiogr 2024;37:550-561. PMID: 38199332. DOI: 10.1016/j.echo.2023.12.013.
Abstract
Congenital heart disease is a severe health risk for newborns. Early detection of abnormalities in fetal cardiac structure and function during pregnancy allows families to seek timely diagnostic and therapeutic advice, and early intervention planning can significantly improve fetal survival. Echocardiography is one of the most accessible and widely used tools in the diagnosis of fetal congenital heart disease. However, traditional fetal echocardiography is limited by fetal, maternal, and ultrasound-equipment factors and is highly dependent on the operator's skill. Artificial intelligence (AI), built on rapidly advancing computer algorithms, has great potential to help sonographers reach accurate diagnoses in less time and to bridge the skill gap between regions. In recent years, AI-assisted fetal echocardiography has been successfully applied to a wide range of ultrasound diagnoses. This article systematically reviews applications of AI in fetal echocardiography in terms of image processing, biometry, and disease diagnosis, and provides an outlook for future research.
Affiliation(s)
- Junmin Zhang, Sushan Xiao, Ye Zhu, Zisang Zhang, Haiyan Cao, Mingxing Xie, Li Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China (all authors)
2. Di Vece C, Le Lous M, Dromey B, Vasconcelos F, David AL, Peebles D, Stoyanov D. Ultrasound Plane Pose Regression: Assessing Generalized Pose Coordinates in the Fetal Brain. IEEE Trans Med Robot Bionics 2024;6:41-52. PMID: 38881728. PMCID: PMC7616102. DOI: 10.1109/tmrb.2023.3328638.
Abstract
In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a significant challenge in skill acquisition. We aim to build a US plane localization system for 3D visualization, training, and guidance without integrating additional sensors. This work builds on our previous work, which predicts the six-dimensional (6D) pose of arbitrarily oriented US planes slicing the fetal brain with respect to a normalized reference frame using a convolutional neural network (CNN) regression model. Here, we analyze in detail the assumptions of the normalized fetal brain reference frame and quantify its accuracy with respect to the acquisition of the transventricular (TV) standard plane (SP) for fetal biometry. We investigate the impact of registration quality in the training and testing data and its subsequent effect on trained models. Finally, we introduce data augmentations and larger training sets that improve on our previous results, achieving median errors of 2.97 mm and 6.63° for translation and rotation, respectively.
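The translation and rotation errors quoted above can be computed, under common pose-regression conventions, as the Euclidean distance between translation vectors and the geodesic distance between rotation matrices. A minimal sketch (illustrative only, not code from the paper; the function names are ours):

```python
import math

def translation_error(t_pred, t_true):
    # Euclidean distance between predicted and ground-truth translations (mm)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_pred, t_true)))

def rotation_error_deg(R_pred, R_true):
    # Geodesic distance between two 3x3 rotation matrices:
    # angle = arccos((trace(R_pred^T @ R_true) - 1) / 2)
    # trace(R_pred^T @ R_true) equals the elementwise (Frobenius) inner product.
    trace = sum(R_pred[i][j] * R_true[i][j] for i in range(3) for j in range(3))
    cos_angle = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp for numeric safety
    return math.degrees(math.acos(cos_angle))
```

For example, a plane rotated 90° about the z-axis relative to the ground truth yields a rotation error of 90°, and a pure 3-mm offset yields a translation error of 3 mm.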
Affiliation(s)
- Chiara Di Vece, Francisco Vasconcelos, Danail Stoyanov
- EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, WC1E 6DB London, U.K.
- Maela Le Lous, Brian Dromey, Anna L. David, Donald Peebles
- WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, WC1E 6DB London, U.K.
3. Sarker MMK, Singh VK, Alsharid M, Hernandez-Cruz N, Papageorghiou AT, Noble JA. COMFormer: Classification of Maternal-Fetal and Brain Anatomy Using a Residual Cross-Covariance Attention Guided Transformer in Ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control 2023;70:1417-1427. PMID: 37665699. DOI: 10.1109/tuffc.2023.3311879.
Abstract
Monitoring the healthy development of a fetus requires accurate and timely identification of different maternal-fetal structures as they grow. To facilitate this objective in an automated fashion, we propose a deep-learning-based image classification architecture, COMFormer, to classify maternal-fetal and brain anatomical structures present in 2-D fetal ultrasound (US) images. The architecture classifies two subcategories separately: maternal-fetal structures (abdomen, brain, femur, thorax, mother's cervix (MC), and others) and brain anatomical structures (trans-thalamic (TT), trans-cerebellum (TC), trans-ventricular (TV), and non-brain (NB)). Our architecture relies on a transformer-based approach that leverages spatial and global features through a newly designed residual cross-covariance attention block. This block introduces a cross-covariance attention (XCA) mechanism to capture long-range representations from the input using spatial (e.g., shape, texture, intensity) and global features. To build COMFormer, we used a large publicly available dataset (BCNatal) consisting of 12,400 images from 1792 subjects. Experimental results show that COMFormer outperforms recent CNN- and transformer-based models, achieving 95.64% and 96.33% classification accuracy on maternal-fetal and brain anatomy, respectively.
4. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023;12:6833. PMID: 37959298. PMCID: PMC10649694. DOI: 10.3390/jcm12216833. Open access.
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is cost effective and easily accessible but time consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study provides an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. A systematic literature search was performed in the PubMed and Cochrane Library databases, and matching abstracts were screened against the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with available full texts were assigned to OB/GYN subspecialties and research topics. The review includes 189 articles published from 1994 to 2023: 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. In conclusion, applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. While most studies address common application fields such as fetal biometry, this review also outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost, Philipp Kosian, Jorge Jimenez Cruz, Ulrich Gembruch, Brigitte Strizek, Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
5. Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023;43:1176-1219. PMID: 37503802. DOI: 10.1002/pd.6411.
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched from inception through May 2022 using the keywords "neural networks" OR "artificial intelligence" OR "machine learning" OR "deep learning", AND "obstetrics" OR "obstetrical" OR "fetus" OR "foetus" OR "fetal" OR "foetal" OR "pregnancy" OR "pregnant", AND "ultrasound". The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound, defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. The search identified 127 papers that fulfilled the inclusion criteria. Current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses such as assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan, Lea Nehme, Alfred Abuhamad
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA (all authors)
6. Balagalla UB, Jayasooriya J, de Alwis C, Subasinghe A. Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images – a survey. Comput Methods Biomech Biomed Eng Imaging Vis 2023. DOI: 10.1080/21681163.2023.2179343.
Affiliation(s)
- U. B. Balagalla, J. V. D. Jayasooriya, C. de Alwis, A. Subasinghe
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka (all authors)
7. Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023;83:102629. PMID: 36308861. DOI: 10.1016/j.media.2022.102629.
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing fetal ultrasound (US) images. A number of survey papers are available, but most focus on the broader area of medical-image analysis or do not cover all fetal-US DL applications. This paper surveys the most recent work in the field, covering 153 research papers published after 2017. Papers are analyzed and discussed from both the methodological and the application perspective, categorized into (i) fetal standard-plane detection, (ii) anatomical-structure analysis, and (iii) biometry-parameter estimation. For each category, the main limitations and open issues are presented, with summary tables to facilitate comparison among the different approaches. Emerging applications are also outlined, and publicly available datasets and performance metrics commonly used to assess algorithms are summarized. The paper ends with a critical summary of the current state of the art in DL algorithms for fetal US image analysis and a discussion of the challenges that researchers must tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
8. Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via ultrasound images. iScience 2022;25:104713. PMID: 35856024. PMCID: PMC9287600. DOI: 10.1016/j.isci.2022.104713. Open access.
Abstract
Several reviews have been conducted regarding artificial intelligence (AI) techniques to improve pregnancy outcomes, but none focus on ultrasound images. This survey explores how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines and conducted a comprehensive search of eight bibliographic databases. Out of 1269 studies, 107 are included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetal head (43), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Highlights:
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized by region: general, head, heart, face, abdomen
- The most used AI techniques are classification, segmentation, object detection, and RL
- Research and practical implications are included
9. Can AI Automatically Assess Scan Quality of Hip Ultrasound? Appl Sci (Basel) 2022. DOI: 10.3390/app12084072.
Abstract
Ultrasound images can reliably detect Developmental Dysplasia of the Hip (DDH) during early infancy. Accuracy of diagnosis depends on the scan quality, which is subjectively assessed by the sonographer during ultrasound examination. Such assessment is prone to errors and often results in poor-quality scans not being reported, risking misdiagnosis. In this paper, we propose an Artificial Intelligence (AI) technique for automatically determining scan quality. We trained a Convolutional Neural Network (CNN) to categorize 3D Ultrasound (3DUS) hip scans as ‘adequate’ or ‘inadequate’ for diagnosis. We evaluated the performance of this AI technique on two datasets—Dataset 1 (DS1) consisting of 2187 3DUS images in which each image was assessed by one reader for scan quality on a scale of 1 (lowest quality) to 5 (optimal quality) and Dataset 2 (DS2) consisting of 107 3DUS images evaluated semi-quantitatively by four readers using a 10-point scoring system. As a binary classifier (adequate/inadequate), the AI technique gave highly accurate predictions on both datasets (DS1 accuracy = 96% and DS2 accuracy = 91%) and showed high agreement with expert readings in terms of Intraclass Correlation Coefficient (ICC) and Cohen’s kappa coefficient (K). Using our AI-based approach as a screening tool during ultrasound scanning or postprocessing would ensure high scan quality and lead to more reliable ultrasound hip examination in infants.
10. Sun Y, Yang H, Zhou J, Wang Y. ISSMF: Integrated semantic and spatial information of multi-level features for automatic segmentation in prenatal ultrasound images. Artif Intell Med 2022;125:102254. DOI: 10.1016/j.artmed.2022.102254.
11. Yu Y, Chen Z, Zhuang Y, Yi H, Han L, Chen K, Lin J. A guiding approach of ultrasound scan for accurately obtaining standard diagnostic planes of fetal brain malformation. J Xray Sci Technol 2022;30:1243-1260. PMID: 36155489. DOI: 10.3233/xst-221278.
Abstract
BACKGROUND: Standard planes (SPs) are crucial for the diagnosis of fetal brain malformation, but acquiring them accurately is very time-consuming and requires extensive experience due to the large variation in fetal posture and the complexity of SP definitions. OBJECTIVE: This study presents a guiding approach to help sonographers obtain the SPs more accurately and more quickly. METHODS: The sonographer first scans the fetal head with a 3D probe to obtain volume data; an affine transformation then calibrates the volume to a standard body position and builds the corresponding 3D head model in real time. When the sonographer subsequently scans a plane with the 2D probe, our regression location network (RLNet) clearly shows the position of the current plane in the 3D head model, guiding the sonographer to acquire the three SPs more accurately. Once the three SPs are located, the sagittal and coronal planes are generated automatically from their spatial relationship to the three SPs. RESULTS: Experiments on 3200 2D US images show that the RLNet achieved an average angle error of 3.91 ± 2.86° for the transthalamic plane, a clear improvement over previously published results. The automatically generated coronal and sagittal SPs conform to the diagnostic criteria and requirements for fetal brain malformation. CONCLUSIONS: A deep-learning-based guided scanning method for ultrasonic brain-malformation screening is proposed for the first time and has practical value for future clinical application.
Affiliation(s)
- Yalan Yu, Yan Zhuang, Ke Chen, Jiangli Lin
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Zhong Chen, Heng Yi
- Department of Ultrasound, General Hospital of Western Theater Command, Chengdu, China
- Lin Han
- College of Biomedical Engineering, Sichuan University, Chengdu, China; Haihong Intellimage Medical Technology (Tianjin) Co., Ltd, Tianjin, China
12. Liu R, Liu M, Sheng B, Li H, Li P, Song H, Zhang P, Jiang L, Shen D. NHBS-Net: A Feature Fusion Attention Network for Ultrasound Neonatal Hip Bone Segmentation. IEEE Trans Med Imaging 2021;40:3446-3458. PMID: 34106849. DOI: 10.1109/tmi.2021.3087857.
Abstract
Ultrasound is widely used for diagnosing developmental dysplasia of the hip (DDH) because it does not use radiation, and owing to its low cost and convenience, 2-D ultrasound is still the most common examination in DDH diagnosis. In clinical use, the complexity of both ultrasound image standardization and measurement leads to a high error rate among sonographers. Automatic segmentation of key structures in the hip joint can support a standard-plane detection method that helps sonographers decrease this error rate, but current automatic segmentation methods still face challenges in robustness and accuracy. We therefore propose, for the first time, a neonatal hip bone segmentation network (NHBS-Net) for segmentation of seven key structures. We design three improvements, an enhanced dual attention module, a two-class feature fusion module, and a coordinate convolution output head, to help segment the different structures. Compared with current state-of-the-art networks, NHBS-Net achieves outstanding accuracy and generalizability in our experiments. Additionally, since image standardization is a common need in ultrasonography, segmentation-based standard-plane detection was tested on a 50-image standard dataset: our method can help healthcare workers decrease their error rate from 6%-10% to 2%. Segmentation performance on another ultrasound dataset (fetal heart) further demonstrates the generalizability of our network.
13. Deepika P, Pabitha P. Evaluation of Convolutional Neural Network Architecture for Feasibility Analysis on Fetal Abdomen and Brain Images. J Med Imaging Health Inform 2021. DOI: 10.1166/jmihi.2021.3844.
Abstract
This research evaluates the feasibility of classifying fetal ultrasound images as normal or abnormal using machine learning algorithms. Most earlier research produced a high percentage of false-negative classifications, and recent work aims to reduce that rate. Moreover, the number of sonologists available worldwide to analyze prenatal ultrasound is very small, a shortage that an efficient algorithm reducing the percentage of false negatives in the diagnostic output can help address. Several earlier studies analyzed either fetal abdominal images or fetal head images, forcing the medical industry to use two separate diagnostic modules. This work designs and implements a convolutional framework, the two-Convolutional-Neural-Network (tCNN) model, for diagnosing both kinds of fetal images: it classifies fetal abdominal and fetal brain images as normal or abnormal. CNN1 of the tCNN performs segmentation and classification based on abdominal circumference and measurements of the stomach bubble, umbilical vein, and amniotic fluid; CNN2 classifies based on head circumference and measured femur, crown-rump, and humerus lengths. An extensive experiment with clinical validation was carried out, and the results were compared with expert assessments in terms of segmentation accuracy and obstetric measurements. This paper provides a foundation for future multi-classification research on diagnosing fetal intracranial abnormalities and differential diagnosis using machine learning algorithms.
Affiliation(s)
- P. Deepika
- Department of Computer Science and Engineering, Rajalakshmi Institute of Technology, Chennai 600124, Tamil Nadu, India
- P. Pabitha
- Department of Computer Technology, MIT Campus, Anna University, Chennai 600044, Tamil Nadu, India
14. Hareendranathan AR, Chahal BS, Zonoobi D, Sukhdeep D, Jaremko JL. Artificial Intelligence to Automatically Assess Scan Quality in Hip Ultrasound. Indian J Orthop 2021;55:1535-1542. PMID: 35003541. PMCID: PMC8688598. DOI: 10.1007/s43465-021-00455-w.
Abstract
PURPOSE: Since it is fast, inexpensive, and increasingly portable, ultrasound can be used for early detection of developmental dysplasia of the hip (DDH) in infants at point-of-care. However, accurate interpretation is highly dependent on scan quality: poor-quality images lead to misdiagnosis, and inexperienced users may not even recognize the deficiencies in their images. Currently, users assess scan quality subjectively from image landmarks, which is prone to human error. We propose instead using artificial intelligence (AI) to assess scan quality automatically. METHODS: We trained separate convolutional neural network (CNN) models to detect the presence of each of four commonly used ultrasound landmarks in each hip image: straight horizontal iliac wing, labrum, os ischium, and midportion of the femoral head. We used 100 3D ultrasound (3DUS) images for training and validated the technique on a set of 107 3DUS images also scored for landmarks by three non-expert readers and one expert radiologist. RESULTS: The AI achieved ≥85% accuracy for all four landmarks (ilium = 0.89, labrum = 0.94, os ischium = 0.85, femoral head = 0.98) as a binary classifier between adequate and inadequate scan quality. It also showed excellent agreement with manual assessment in terms of intraclass correlation coefficient (ICC) and Cohen's kappa coefficient (K) for the ilium (ICC = 0.81, K = 0.56), os ischium (ICC = 0.89, K = 0.63), and femoral head (ICC = 0.83, K = 0.66), and moderate to good agreement for the labrum (ICC = 0.65, K = 0.33). CONCLUSION: This new technique could ensure high scan quality and facilitate more widespread use of ultrasound in population screening for DDH.
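The Cohen's kappa values quoted above (e.g., K = 0.56 for the ilium) measure inter-rater agreement corrected for chance. A minimal sketch of the standard two-rater formula (illustrative, not code from the study):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters: kappa = (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    # Chance agreement: product of each rater's marginal rate, summed over categories
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

For example, two raters labeling four scans as adequate (1) / inadequate (0) with ratings [1, 1, 0, 0] and [1, 0, 0, 0] agree on 3 of 4 scans (p_o = 0.75) with p_e = 0.5, giving kappa = 0.5, "moderate" agreement on the usual interpretation scales.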
Affiliation(s)
- Baljot S. Chahal, Dulai Sukhdeep
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, T6G 2B7, Canada
- Jacob L. Jaremko
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, T6G 2B7, Canada; MEDO.ai Inc, Singapore
15. Yang X, Dou H, Huang R, Xue W, Huang Y, Qian J, Zhang Y, Luo H, Guo H, Wang T, Xiong Y, Ni D. Agent With Warm Start and Adaptive Dynamic Termination for Plane Localization in 3D Ultrasound. IEEE Trans Med Imaging 2021;40:1950-1961. PMID: 33784618. DOI: 10.1109/tmi.2021.3069663.
Abstract
Accurate standard plane (SP) localization is the fundamental step in prenatal ultrasound (US) diagnosis. Typically, dozens of US SPs are collected to determine the clinical diagnosis. 2D US has to perform a scan for each SP, which is time-consuming and operator-dependent. In contrast, 3D US, which captures multiple SPs in one shot, has the inherent advantages of lower user-dependency and higher efficiency. However, automatically locating SPs in 3D US is very challenging due to the huge search space and large fetal posture variations. Our previous study proposed a deep reinforcement learning (RL) framework with an alignment module and active termination to localize SPs in 3D US automatically. However, the termination of the agent's search in RL is important and affects practical deployment. In this study, we enhance our previous RL framework with a newly designed adaptive dynamic termination that enables an early stop of the agent's search, saving up to 67% of inference time and thus boosting the accuracy and efficiency of the RL framework at the same time. Besides, we validate the effectiveness and generalizability of our algorithm extensively on our in-house multi-organ datasets containing 433 fetal brain volumes, 519 fetal abdomen volumes, and 683 uterus volumes. Our approach achieves localization errors of 2.52 mm/10.26°, 2.48 mm/10.39°, 2.02 mm/10.48°, 2.00 mm/14.57°, 2.61 mm/9.71°, 3.09 mm/9.58° and 1.49 mm/7.54° for the transcerebellar, transventricular and transthalamic planes in the fetal brain, the abdominal plane in the fetal abdomen, and the mid-sagittal, transverse and coronal planes in the uterus, respectively. Experimental results show that our method is general and has the potential to improve the efficiency and standardization of US scanning.
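The adaptive dynamic termination idea — stopping the agent once its plane-quality estimate stops improving, rather than exhausting a fixed step budget — can be sketched as follows. The scoring and stepping functions here are simple stand-ins for the learned agent, not the paper's implementation:

```python
def localize_with_dynamic_termination(score_fn, start, step_fn,
                                      max_steps=100, patience=3):
    """Greedy agent search with adaptive dynamic termination: stop as soon
    as the plane-quality estimate has not improved for `patience` steps,
    instead of always spending the full step budget."""
    state, best_state = start, start
    best_score, stall, t = score_fn(start), 0, 0
    for t in range(1, max_steps + 1):
        state = step_fn(state)
        s = score_fn(state)
        if s > best_score:
            best_score, best_state, stall = s, state, 0
        else:
            stall += 1
            if stall >= patience:  # early stop: the search has stalled
                break
    return best_state, best_score, t

# Mock 1-D search: quality peaks at position 7; the agent steps +1 each move.
# score_fn stands in for the learned value estimate, not a real Q-network.
best, score, steps = localize_with_dynamic_termination(
    score_fn=lambda x: -(x - 7) ** 2, start=0, step_fn=lambda x: x + 1,
    max_steps=50, patience=3)
print(best, steps)  # 7 10 — stopped well short of the 50-step budget
```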
16
Yang X, Huang Y, Huang R, Dou H, Li R, Qian J, Huang X, Shi W, Chen C, Zhang Y, Wang H, Xiong Y, Ni D. Searching collaborative agents for multi-plane localization in 3D ultrasound. Med Image Anal 2021; 72:102119. [PMID: 34144345 DOI: 10.1016/j.media.2021.102119] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 03/30/2021] [Accepted: 05/14/2021] [Indexed: 11/29/2022]
Abstract
3D ultrasound (US) has become prevalent due to its rich spatial and diagnostic information not contained in 2D US. Moreover, 3D US can contain multiple standard planes (SPs) in one shot. Thus, automatically localizing SPs in 3D US has the potential to improve user-independence and scanning efficiency. However, manual SP localization in 3D US is challenging because of the low image quality, huge search space and large anatomical variability. In this work, we propose a novel multi-agent reinforcement learning (MARL) framework to simultaneously localize multiple SPs in 3D US. Our contribution is four-fold. First, our proposed method is general and can accurately localize multiple SPs in different challenging US datasets. Second, we equip the MARL system with a recurrent neural network (RNN)-based collaborative module, which can strengthen the communication among agents and learn the spatial relationship among planes effectively. Third, we adopt neural architecture search (NAS) to automatically design the network architecture of both the agents and the collaborative module. Last, we believe we are the first to realize automatic SP localization in pelvic US volumes, and we note that our approach can handle both normal and abnormal uterus cases. Extensively validated on two challenging datasets of the uterus and fetal brain, our proposed method achieves average localization accuracies of 7.03°/1.59 mm and 9.75°/1.19 mm. Experimental results show that our lightweight MARL model has higher accuracy than state-of-the-art methods.
Affiliation(s)
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Yuhao Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Ruobing Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Haoran Dou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Rui Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Jikuan Qian
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Xiaoqiong Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Wenlong Shi
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Chaoyu Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Yuanji Zhang
- Department of Ultrasound, Luohu People's Hospital, Shenzhen, China
- Haixia Wang
- Department of Ultrasound, Luohu People's Hospital, Shenzhen, China
- Yi Xiong
- Department of Ultrasound, Luohu People's Hospital, Shenzhen, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
17
Yang X, Li H, Liu L, Ni D. Scale-aware Auto-context-guided Fetal US Segmentation with Structured Random Forests. BIO INTEGRATION 2020. [DOI: 10.15212/bioi-2020-0016] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Accurate measurement of fetal biometrics in ultrasound at different trimesters is essential in assisting clinicians to conduct pregnancy diagnosis. However, the accuracy of manual segmentation for measurement is highly user-dependent. Here, we design a general framework for automatically segmenting fetal anatomical structures in two-dimensional (2D) ultrasound (US) images and thus make objective biometric measurements available. We first introduce structured random forests (SRFs) as the core discriminative predictor to recognize the region of fetal anatomical structures with a primary classification map. The patch-wise joint labeling presented by SRFs has inherent advantages in identifying an ambiguous/fuzzy boundary and reconstructing incomplete anatomical boundaries in US. Then, to get a more accurate and smooth classification map, a scale-aware auto-context model is injected to enhance the contour details of the classification map from various visual levels. The final segmentation can be obtained from the converged classification map with thresholding. Our framework is validated on two important biometric measurements: fetal head circumference (HC) and abdominal circumference (AC). The final results illustrate that our proposed method outperforms state-of-the-art methods in terms of segmentation accuracy.
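The auto-context loop described above — re-classifying each location with the previous round's classification map as extra context — can be illustrated with a toy smoothing pass, where a neighborhood average stands in for the SRF re-prediction and a final threshold yields the segmentation:

```python
def auto_context_refine(prob_map, rounds=3):
    """Toy auto-context pass: each round re-scores every pixel using the
    previous round's map as context (here, a 3x3 neighborhood average),
    then the converged map is thresholded into a segmentation."""
    h, w = len(prob_map), len(prob_map[0])
    for _ in range(rounds):
        nxt = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                vals = [prob_map[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if 0 <= i + di < h and 0 <= j + dj < w]
                nxt[i][j] = sum(vals) / len(vals)
        prob_map = nxt
    return [[1 if p >= 0.5 else 0 for p in row] for row in prob_map]

hole = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],   # isolated false negative inside the structure
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
refined = auto_context_refine(hole, rounds=1)
print(refined[2][2])  # 1 — the interior gap is filled by its context
```

This is only an analogy: the real model feeds the map into a learned classifier at multiple scales rather than averaging, but the fill-in behavior at ambiguous pixels is the same mechanism.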
Affiliation(s)
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
- Haoming Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
- Li Liu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
18
Yang X, Wang X, Wang Y, Dou H, Li S, Wen H, Lin Y, Heng PA, Ni D. Hybrid attention for automatic segmentation of whole fetal head in prenatal ultrasound volumes. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 194:105519. [PMID: 32447146 DOI: 10.1016/j.cmpb.2020.105519] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 04/05/2020] [Accepted: 04/23/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Biometric measurements of the fetal head are important indicators for maternal and fetal health monitoring during pregnancy. 3D ultrasound (US) has unique advantages over 2D scans in covering the whole fetal head and may improve diagnosis. However, automatically segmenting the whole fetal head in US volumes remains an emerging and unsolved problem. The challenges that automated solutions need to tackle include poor image quality, boundary ambiguity, long-span occlusion, and appearance variability across different fetal poses and gestational ages. In this paper, we propose the first fully automated solution for segmenting the whole fetal head in US volumes. METHODS The segmentation task is first formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture. We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress non-informative volumetric features in a composite and hierarchical way. With little computation overhead, HAS proves to be effective in addressing boundary ambiguity and deficiency. To enhance spatial consistency in segmentation, we further organize multiple segmentors in a cascaded fashion to refine the results by revisiting the context in the predictions of predecessors. RESULTS Validated on a large dataset collected from 100 healthy volunteers, our method presents superior segmentation performance (Dice Similarity Coefficient (DSC), 96.05%) and remarkable agreement with experts (-1.6±19.5 mL). With another 156 volumes collected from 52 volunteers, we achieve high reproducibility (mean standard deviation 11.524 mL) against scan variations. CONCLUSION This is the first investigation of whole fetal head segmentation in 3D US. Our method is promising as a feasible solution for assisting volumetric US-based prenatal studies.
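The DSC figure quoted above is the standard overlap metric between a predicted and a reference mask; a minimal sketch on toy binary masks:

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks (flat lists):
    DSC = 2|A ∩ B| / (|A| + |B|), 1.0 for identical non-empty masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy 6-voxel masks: 3 overlapping foreground voxels out of 4 each.
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
print(dice(pred, truth))  # 2*3 / (4+4) = 0.75
```

In practice the masks are full 3D volumes; flattening them voxel-wise reduces the computation to exactly this form.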
Affiliation(s)
- Xin Yang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Xu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
- Yi Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
- Haoran Dou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
- Shengli Li
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen, China
- Huaxuan Wen
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen, China
- Yi Lin
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
19
Xie B, Lei T, Wang N, Cai H, Xian J, He M, Zhang L, Xie H. Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks. Int J Comput Assist Radiol Surg 2020; 15:1303-1312. [PMID: 32488568 DOI: 10.1007/s11548-020-02182-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Accepted: 04/23/2020] [Indexed: 11/29/2022]
Abstract
PURPOSE Fetal brain abnormalities are some of the most common congenital malformations; they may be associated with syndromic and chromosomal malformations and can lead to neurodevelopmental delay and mental retardation. Early prenatal detection of brain abnormalities is essential for informing clinical management pathways and counseling of parents. The purpose of this research is to develop computer-aided diagnosis algorithms for five common fetal brain abnormalities, which may assist doctors in detecting brain abnormalities in antenatal neurosonographic assessment. METHODS We applied a classifier to classify images of fetal brain standard planes (transventricular and transcerebellar) as normal or abnormal. The classifier was trained on image-level labeled images. In the first step, craniocerebral regions were segmented from the ultrasound images. Then, these segmentations were classified into four categories. Last, the lesions in the abnormal images were localized by class activation mapping. RESULTS We evaluated our algorithms on real-world clinical datasets of fetal brain ultrasound images. We observed that the proposed method achieved a Dice score of 0.942 on craniocerebral region segmentation, an average F1-score of 0.96 on classification and a mean IoU of 0.497 on lesion localization. CONCLUSION We present computer-aided diagnosis algorithms for fetal brain ultrasound images based on deep convolutional neural networks. Our algorithms could potentially be applied for diagnosis assistance and are expected to help junior doctors make clinical decisions and reduce false negatives of fetal brain abnormalities.
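Class activation mapping, used above for lesion localization, reduces to a weighted sum of the final convolutional feature maps using the classifier weights of the predicted class; high values localize the image evidence. A toy sketch with hypothetical feature maps and weights (not the paper's network):

```python
def class_activation_map(feature_maps, class_weights):
    """CAM = sum_k w_k * F_k: blend the last conv layer's feature maps with
    the predicted class's classifier weights; hot regions localize evidence."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(wk * fm[i][j] for wk, fm in zip(class_weights, feature_maps))
             for j in range(w)] for i in range(h)]

# Two hypothetical 3x3 feature maps; the "abnormal" class weights favor map 0.
f0 = [[0, 0, 0], [0, 9, 1], [0, 1, 0]]
f1 = [[1, 0, 0], [0, 0, 0], [0, 0, 2]]
cam = class_activation_map([f0, f1], class_weights=[1.0, 0.2])
peak = max((v, i, j) for i, row in enumerate(cam) for j, v in enumerate(row))
print(peak)  # (9.0, 1, 1) — hottest response at row 1, column 1
```

In a real network the CAM is upsampled back to image resolution and thresholded to produce the lesion box.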
Affiliation(s)
- Baihong Xie
- South China University of Technology, Guangzhou, China
- Ting Lei
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
- Nan Wang
- Guangzhou Aiyunji Information Technology Co., Ltd., Guangzhou, China
- Hongmin Cai
- South China University of Technology, Guangzhou, China
- Jianbo Xian
- South China University of Technology, Guangzhou, China; Guangzhou Aiyunji Information Technology Co., Ltd., Guangzhou, China
- Miao He
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
- Lihe Zhang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
- Hongning Xie
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
20
Xu L, Liu M, Shen Z, Wang H, Liu X, Wang X, Wang S, Li T, Yu S, Hou M, Guo J, Zhang J, He Y. DW-Net: A cascaded convolutional neural network for apical four-chamber view segmentation in fetal echocardiography. Comput Med Imaging Graph 2019; 80:101690. [PMID: 31968286 DOI: 10.1016/j.compmedimag.2019.101690] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Revised: 12/19/2019] [Accepted: 12/20/2019] [Indexed: 01/22/2023]
Abstract
Fetal echocardiography (FE) is a widely used medical examination for early diagnosis of congenital heart disease (CHD). The apical four-chamber view (A4C) is an important view among early FE images. Accurate segmentation of crucial anatomical structures in the A4C view is a useful and important step for early diagnosis and timely treatment of CHDs. However, it is a challenging task due to several unfavorable factors: (a) artifacts and speckle noise produced by ultrasound imaging; (b) category confusion caused by the similarity of anatomical structures and variations in scanning angle; and (c) missing boundaries. In this paper, we propose an end-to-end DW-Net for accurate segmentation of seven important anatomical structures in the A4C view. The network comprises two components: 1) a Dilated Convolutional Chain (DCC) for "gridding issue" reduction, multi-scale contextual information aggregation and accurate localization of cardiac chambers, and 2) a W-Net for gaining more precise boundaries and yielding refined segmentation results. Extensive experiments on a dataset of 895 A4C views demonstrated that DW-Net achieves good segmentation results, including a Dice Similarity Coefficient (DSC) of 0.827, a Pixel Accuracy (PA) of 0.933 and an AUC of 0.990, and that it substantially outperforms some well-known segmentation methods. Our work was highly valued by experienced clinicians. The accurate and automatic segmentation of the A4C view using the proposed DW-Net can support further extraction of useful clinical indicators in early FE and improve the prenatal diagnostic accuracy and efficiency of CHDs.
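The dilated convolutions underlying the DCC enlarge the receptive field without adding parameters by spacing the kernel taps `dilation` samples apart; chaining different rates aggregates multi-scale context, and mixing the rates is what mitigates the "gridding issue". A minimal 1-D sketch (not the DW-Net code):

```python
def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D convolution whose taps are `dilation` samples apart:
    a k-tap kernel then covers (k-1)*dilation + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * dilation
    return [sum(kernel[t] * x[i + t * dilation] for t in range(k))
            for i in range(len(x) - span)]

x = [1, 2, 3, 4, 5, 6, 7]
# Same 3-tap box kernel; only the dilation (and hence the context) changes.
print(dilated_conv1d(x, kernel=[1, 1, 1], dilation=1))  # [6, 9, 12, 15, 18]
print(dilated_conv1d(x, kernel=[1, 1, 1], dilation=2))  # [9, 12, 15]
```

With dilation 2 the same 3 weights see a 5-sample window, which is the parameter-free receptive-field growth the DCC exploits in 2D.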
Affiliation(s)
- Lu Xu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Mingyuan Liu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Zhenrong Shen
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Xiaowei Liu
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Xin Wang
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Siyu Wang
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Tiefeng Li
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Shaomei Yu
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Min Hou
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Jianhua Guo
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Yihua He
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
21
Lin Z, Li S, Ni D, Liao Y, Wen H, Du J, Chen S, Wang T, Lei B. Multi-task learning for quality assessment of fetal head ultrasound images. Med Image Anal 2019; 58:101548. [PMID: 31525671 DOI: 10.1016/j.media.2019.101548] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 07/15/2019] [Accepted: 08/23/2019] [Indexed: 11/26/2022]
Abstract
It is essential to measure anatomical parameters in prenatal ultrasound images to monitor the growth and development of the fetus, which relies heavily on obtaining a standard plane. However, the acquisition of a standard plane is, in turn, highly subjective and depends on the clinical experience of sonographers. To deal with this challenge, we propose a new multi-task learning framework using a faster regional convolutional neural network (MF R-CNN) architecture for standard plane detection and quality assessment. MF R-CNN identifies the critical anatomical structures of the fetal head, analyzes whether the magnification of the ultrasound image is appropriate, and then performs quality assessment of ultrasound images based on clinical protocols. Specifically, the first five convolution blocks of MF R-CNN learn features shared within the input data, which can be associated with the detection and classification tasks, and then extend to task-specific output streams. In training, to reconcile the different convergence rates of the different tasks, we devise a sectioned training method based on transfer learning. In addition, our proposed method uses prior clinical and statistical knowledge to reduce the false detection rate. By identifying the key anatomical structures and the magnification of the ultrasound image, we score the ultrasound plane of the fetal head to judge whether or not it is a standard image. Experimental results on our self-collected dataset show that our method can accurately assess the quality of an ultrasound plane within half a second. Our method achieves promising performance compared with state-of-the-art methods, which can improve examination effectiveness and alleviate the measurement error caused by improper ultrasound scanning.
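The protocol-based scoring step can be sketched as a weighted checklist over detected structures; the structure names, weights, detection threshold and pass mark below are illustrative, not the paper's exact protocol:

```python
def score_fetal_head_plane(detections, required, min_score=0.9):
    """Protocol-style scoring sketch: each required item contributes its
    weight when detected with enough confidence; the plane counts as
    'standard' only when the weighted score clears the pass mark."""
    total = sum(required.values())
    got = sum(weight for name, weight in required.items()
              if detections.get(name, 0.0) >= 0.5)   # detection threshold
    score = got / total
    return score, score >= min_score

# Hypothetical checklist and detector confidences for one candidate plane.
required = {"skull halo": 3, "midline": 2, "thalami": 2,
            "cavum septi pellucidi": 2, "proper magnification": 1}
detections = {"skull halo": 0.97, "midline": 0.88, "thalami": 0.91,
              "cavum septi pellucidi": 0.34, "proper magnification": 0.95}
score, is_standard = score_fetal_head_plane(detections, required)
print(round(score, 2), is_standard)  # 0.8 False — one key structure missing
```

Weighting the checklist is what lets a single missing high-value structure veto the plane even when everything else is found.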
Affiliation(s)
- Zehui Lin
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Shengli Li
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Yimei Liao
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Huaxuan Wen
- Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, 3012 Fuqiang Rd, Shenzhen, 518060, China
- Jie Du
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Siping Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
22
Sridar P, Kumar A, Quinton A, Nanan R, Kim J, Krishnakumar R. Decision Fusion-Based Fetal Ultrasound Image Plane Classification Using Convolutional Neural Networks. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:1259-1273. [PMID: 30826153 DOI: 10.1016/j.ultrasmedbio.2018.11.016] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Revised: 11/26/2018] [Accepted: 11/29/2018] [Indexed: 06/09/2023]
Abstract
Machine learning for ultrasound image analysis and interpretation can be helpful in automated image classification in large-scale retrospective analyses to objectively derive new indicators of abnormal fetal development that are embedded in ultrasound images. Current approaches to automatic classification are limited to the use of either image patches (cropped images) or the global (whole) image. As many fetal organs have similar visual features, cropped images can lead to misclassification of certain structures such as the kidneys and abdomen. Conversely, the whole image does not encode sufficient local information about structures to identify different structures in different locations. Here we propose a method to automatically classify 14 different fetal structures in 2-D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image. Our method trains two feature extractors by fine-tuning pre-trained convolutional neural networks with the whole ultrasound fetal images and the discriminant regions of the fetal structures found in the whole image. The novelty of our method lies in integrating the classification decisions made from the global and local features without relying on priors. In addition, our method can use the classification outcome to localize the fetal structures in the image. Our experiments on a data set of 4074 2-D ultrasound images (training: 3109, test: 965) achieved a mean accuracy of 97.05%, mean precision of 76.47% and mean recall of 75.41%. A Cohen κ of 0.72 indicated substantial agreement between the ground truth and the proposed method. The superiority of the proposed method over other non-fusion-based methods is statistically significant (p < 0.05). We found that our method is capable of classifying images without ultrasound scanner overlays with a mean accuracy of 92%. The proposed method can be leveraged to retrospectively classify any ultrasound images in clinical research.
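The decision-fusion step — combining the global (whole-image) and local (cropped-region) classifiers — can be sketched as a late fusion of their class probabilities; the class names and probability values below are hypothetical:

```python
def fuse_decisions(global_probs, local_probs, alpha=0.5):
    """Late fusion sketch: blend whole-image and cropped-region class
    probabilities, then take the argmax. `alpha` weights the global view."""
    fused = [alpha * g + (1 - alpha) * l
             for g, l in zip(global_probs, local_probs)]
    return fused.index(max(fused)), fused

classes = ["abdomen", "kidney", "femur"]
g = [0.48, 0.42, 0.10]   # whole image: abdomen vs. kidney is ambiguous
l = [0.20, 0.70, 0.10]   # cropped region clearly favors kidney
idx, fused = fuse_decisions(g, l)
print(classes[idx])  # kidney — the local view resolves the ambiguity
```

This illustrates why fusion helps with look-alike organs: either view alone can be wrong, but their blended evidence is harder to fool.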
Affiliation(s)
- Pradeeba Sridar
- Department of Engineering Design, Indian Institute of Technology Madras, India; School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Ashnil Kumar
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Ann Quinton
- Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
- Ralph Nanan
- Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
- Jinman Kim
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
23
Yang X, Yu L, Li S, Wen H, Luo D, Bian C, Qin J, Ni D, Heng PA. Towards Automated Semantic Segmentation in Prenatal Volumetric Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:180-193. [PMID: 30040635 DOI: 10.1109/tmi.2018.2858779] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Volumetric ultrasound is rapidly emerging as a viable imaging modality for routine prenatal examinations. Biometrics obtained from volumetric segmentation support precise maternal and fetal health monitoring. However, poor image quality, low contrast, boundary ambiguity and complex anatomical shapes mean that efficient segmentation tools are largely lacking. This makes 3-D ultrasound difficult to interpret and hinders its widespread adoption in obstetrics. In this paper, we address the problem of semantic segmentation in prenatal ultrasound volumes. Our contribution is threefold: 1) we propose the first fully automatic framework to simultaneously segment multiple anatomical structures of intensive clinical interest, including the fetus, gestational sac, and placenta, which remains a rarely studied and arduous challenge; 2) we propose a composite architecture for dense labeling, in which a customized 3-D fully convolutional network explores spatial intensity concurrency for initial labeling, while a multi-directional recurrent neural network (RNN) encodes spatial sequentiality to combat boundary ambiguity for significant refinement; and 3) we introduce a hierarchical deep supervision mechanism to boost the information flow within the RNN, fit the latent sequence hierarchy at fine scales, and further improve the segmentation results. Extensively verified on large in-house data sets, our method demonstrates superior segmentation performance, good agreement with expert measurements and high reproducibility against scanning variations, and is thus promising for advancing prenatal ultrasound examinations.
24
Torrents-Barrena J, Piella G, Masoller N, Gratacós E, Eixarch E, Ceresa M, Ballester MÁG. Segmentation and classification in MRI and US fetal imaging: Recent trends and future prospects. Med Image Anal 2018; 51:61-88. [PMID: 30390513] [DOI: 10.1016/j.media.2018.10.003]
Abstract
Fetal imaging is a burgeoning field. Advances in both magnetic resonance imaging and (3D) ultrasound now allow clinicians to diagnose fetal structural abnormalities such as those involved in twin-to-twin transfusion syndrome, gestational diabetes mellitus, pulmonary sequestration and hypoplasia, congenital heart disease, diaphragmatic hernia, and ventriculomegaly. Given continued breakthroughs in in utero image analysis and (3D) reconstruction models, it is now possible to gain more insight into the ongoing development of the fetus. The best prenatal diagnostic performance relies on clinicians being well prepared in fetal anatomy, so fetal imaging is likely to expand and become more prevalent in the coming years. This review covers, for the first time, state-of-the-art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart, and placenta in magnetic resonance imaging and (3D) ultrasound. Potential applications of these methods in clinical settings are also examined. Finally, improvements to existing approaches and the most promising avenues for new research are briefly outlined.
Affiliation(s)
- Jordina Torrents-Barrena: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Gemma Piella: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Narcís Masoller: BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Eduard Gratacós: BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Elisenda Eixarch: BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Mario Ceresa: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Miguel Ángel González Ballester: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; ICREA, Barcelona, Spain
25
Li J, Wang Y, Lei B, Cheng JZ, Qin J, Wang T, Li S, Ni D. Automatic Fetal Head Circumference Measurement in Ultrasound Using Random Forest and Fast Ellipse Fitting. IEEE J Biomed Health Inform 2018; 22:215-223. [DOI: 10.1109/jbhi.2017.2703890]
26
Yaqub M, Kelly B, Papageorghiou AT, Noble JA. A Deep Learning Solution for Automatic Fetal Neurosonographic Diagnostic Plane Verification Using Clinical Standard Constraints. Ultrasound Med Biol 2017; 43:2925-2933. [PMID: 28958729] [DOI: 10.1016/j.ultrasmedbio.2017.07.013]
Abstract
During routine ultrasound assessment of the fetal brain for biometry estimation and detection of fetal abnormalities, sonologists must find accurate imaging planes following a well-defined imaging protocol or clinical standard, which can be difficult for non-experts to do well. This assessment underpins accurate biometry estimation and the detection of possible brain abnormalities. We describe a machine-learning method to automatically assess whether transventricular ultrasound images of the fetal brain have been correctly acquired and meet the required clinical standard. We propose a deep learning solution that breaks the problem into three stages: (i) accurate localization of the fetal brain, (ii) detection of regions that contain structures of interest, and (iii) learning the acoustic patterns in those regions that enable plane verification. We evaluate the developed methodology on a large real-world clinical data set of 2-D mid-gestation fetal images and show that the automatic verification method approaches human expert assessment.
Affiliation(s)
- Mohammad Yaqub: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Brenda Kelly: Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK
- Aris T Papageorghiou: Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK
- J Alison Noble: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
27
Jang J, Park Y, Kim B, Lee SM, Kwon JY, Seo JK. Automatic Estimation of Fetal Abdominal Circumference From Ultrasound Images. IEEE J Biomed Health Inform 2017; 22:1512-1520. [PMID: 29990257] [DOI: 10.1109/jbhi.2017.2776116]
Abstract
Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal biometry, and because the measurement process is time consuming, there is great demand for automatic estimation. However, automated analysis of ultrasound images is complicated because they are patient specific, operator dependent, and machine specific. Among the various fetal biometric parameters, abdominal circumference (AC) is especially difficult to estimate automatically because the abdomen has low contrast against its surroundings, nonuniform contrast, and an irregular shape compared with other structures. We propose a method for automatic estimation of fetal AC from two-dimensional ultrasound data using a specially designed convolutional neural network (CNN), which takes into account doctors' decision processes, anatomical structure, and the characteristics of the ultrasound image. The proposed method uses the CNN to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein) and a Hough transformation to measure AC. We test the proposed method on clinical ultrasound data acquired from 56 pregnant women. Experimental results show that, with relatively few training samples, the proposed CNN provides classification results sufficient for AC estimation via the Hough transformation. The method is quantitatively evaluated and shows stable performance in most cases, even for ultrasound images degraded by shadowing artifacts. In our acceptance check, accuracies were 0.809 and 0.771 against experts 1 and 2, respectively, whereas the agreement between the two experts was 0.905. However, for oversized fetuses, or when the amniotic fluid is not observed or the abdominal area is distorted, the method could not correctly estimate AC.
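The abstract above pairs CNN classification with a Hough transformation to recover the abdominal outline. As a rough illustration of the Hough voting idea only (not the authors' implementation), the sketch below accumulates votes over candidate circle centers and radii for a synthetic edge map; the grid size, center, and radius are made-up values for demonstration.

```python
import math

def hough_circles(edge_points, size, r_min=5, r_max=15):
    """Vote over (cx, cy, r) for each edge pixel; return the best-supported circle.
    Brute-force illustrative accumulator only -- practical implementations
    restrict the search (e.g. using gradient direction) for speed."""
    votes = {}
    for (px, py) in edge_points:
        for cx in range(size):
            for cy in range(size):
                r = round(math.hypot(px - cx, py - cy))
                if r_min <= r <= r_max:
                    votes[(cx, cy, r)] = votes.get((cx, cy, r), 0) + 1
    return max(votes, key=votes.get)

# Synthetic "edge map": the 12 lattice points exactly 10 pixels from (16, 16).
center, radius = (16, 16), 10
edges = {(center[0] + dx, center[1] + dy)
         for dx in range(-10, 11) for dy in range(-10, 11)
         if dx * dx + dy * dy == radius * radius}

cx, cy, r = hough_circles(edges, size=32)
circumference = 2 * math.pi * r  # the biometric estimate from the fitted circle
```

With the clean synthetic edges, the accumulator recovers the true center and radius; on real ultrasound the vote map is far noisier, which is why the paper gates it with CNN-based tissue classification first.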
28
Baumgartner CF, Kamnitsas K, Matthew J, Fletcher TP, Smith S, Koch LM, Kainz B, Rueckert D. SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound. IEEE Trans Med Imaging 2017; 36:2204-2215. [PMID: 28708546] [PMCID: PMC6051487] [DOI: 10.1109/tmi.2017.2712367]
Abstract
Identifying and interpreting fetal standard scan planes during 2-D ultrasound mid-pregnancy examinations are highly complex tasks, which require years of training. Apart from guiding the probe to the correct location, it can be equally difficult for a non-expert to identify relevant structures within the image. Automatic image processing can provide tools to help experienced as well as inexperienced operators with these tasks. In this paper, we propose a novel method based on convolutional neural networks, which can automatically detect 13 fetal standard views in freehand 2-D ultrasound data as well as provide a localization of the fetal structures via a bounding box. An important contribution is that the network learns to localize the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real-time while providing optimal output for the localization task. We present results for real-time annotation, retrospective frame retrieval from saved videos, and localization on a very large and challenging dataset consisting of images and video recordings of full clinical anomaly screenings. We found that the proposed method achieved an average F1-score of 0.798 in a realistic classification experiment modeling real-time detection, and obtained a 90.09% accuracy for retrospective frame retrieval. Moreover, an accuracy of 77.8% was achieved on the localization task.
29
30
Chen H, Wu L, Dou Q, Qin J, Li S, Cheng JZ, Ni D, Heng PA. Ultrasound Standard Plane Detection Using a Composite Neural Network Framework. IEEE Trans Cybern 2017; 47:1576-1586. [PMID: 28371793] [DOI: 10.1109/tcyb.2017.2685080]
Abstract
Ultrasound (US) imaging is a widely used screening tool for obstetric examination and diagnosis. Accurate acquisition of fetal standard planes containing key anatomical structures is crucial for reliable biometric measurement and diagnosis. However, standard plane acquisition is a labor-intensive task and requires an operator with thorough knowledge of fetal anatomy, so automatic approaches are in high demand in clinical practice to alleviate workload and boost examination efficiency. Automatic detection of standard planes from US videos remains challenging due to the high intraclass and low interclass variations of standard planes and the relatively low image quality. Unlike previous studies, each specifically designed for an individual anatomical standard plane, we present a general framework for automatic identification of different standard planes from US videos. Rather than devising hand-crafted visual features for detection, our framework explores in-plane and between-plane feature learning with a novel composite framework of convolutional and recurrent neural networks. To further address the issue of limited training data, a multitask learning framework is implemented to exploit common knowledge across the detection tasks for distinct standard planes and thereby augment feature learning. Extensive experiments on hundreds of US fetus videos corroborate the efficacy of the proposed framework on this difficult standard plane detection problem.
31
Yu Z, Tan EL, Ni D, Qin J, Chen S, Li S, Lei B, Wang T. A Deep Convolutional Neural Network-Based Framework for Automatic Fetal Facial Standard Plane Recognition. IEEE J Biomed Health Inform 2017; 22:874-885. [PMID: 28534800] [DOI: 10.1109/jbhi.2017.2705031]
Abstract
Ultrasound imaging has become a prevalent examination method in prenatal diagnosis. Accurate acquisition of the fetal facial standard plane (FFSP) is the most important precondition for subsequent diagnosis and measurement. In the past few years, considerable effort has been devoted to FFSP recognition using various hand-crafted features, but recognition performance remains unsatisfactory due to the high intraclass variation of FFSPs and the high visual similarity between FFSPs and non-FFSPs. To improve recognition performance, we propose a method to automatically recognize the FFSP via a deep convolutional neural network (DCNN) architecture. The proposed DCNN consists of 16 convolutional layers with small 3 × 3 kernels and three fully connected layers. Global average pooling is adopted in the last pooling layer to significantly reduce the number of network parameters, which alleviates overfitting and improves performance under limited training data. Both a transfer learning strategy and a data augmentation technique tailored for the FFSP are implemented to further boost recognition performance. Extensive experiments demonstrate the advantage of the proposed method over traditional approaches and the effectiveness of the DCNN in recognizing the FFSP for clinical diagnosis.
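The abstract credits global average pooling (GAP) with sharply reducing parameters relative to flattening feature maps into a dense layer. The toy calculation below illustrates why, using made-up layer sizes (512 channels of 7 × 7 maps, 3 output classes) rather than the paper's actual architecture.

```python
def fc_params(inputs, outputs, bias=True):
    """Parameter count of a dense layer mapping `inputs` -> `outputs` units."""
    return inputs * outputs + (outputs if bias else 0)

def gap(feature_maps):
    """Global average pooling: [C][H][W] nested lists -> one mean per channel."""
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

channels, h, w, classes = 512, 7, 7, 3  # hypothetical final feature-map shape

# Option A: flatten all spatial positions, then a dense classifier.
flatten_params = fc_params(channels * h * w, classes)   # 512*49*3 + 3
# Option B: GAP collapses each map to one value before the classifier.
gap_params = fc_params(channels, classes)               # 512*3 + 3

ratio = (flatten_params - classes) / (gap_params - classes)  # weights only
```

Under these assumed sizes the weight count drops by exactly the spatial factor h × w (49×), which is the overfitting-reduction mechanism the abstract appeals to.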
32
Wu L, Cheng JZ, Li S, Lei B, Wang T, Ni D. FUIQA: Fetal Ultrasound Image Quality Assessment With Deep Convolutional Networks. IEEE Trans Cybern 2017; 47:1336-1349. [PMID: 28362600] [DOI: 10.1109/tcyb.2017.2671898]
Abstract
The quality of ultrasound (US) images for the obstetric examination is crucial for accurate biometric measurement. However, manual quality control is a labor-intensive process and often impractical in a clinical setting. To improve examination efficiency and alleviate measurement error caused by improper US scanning operation and slice selection, a computerized fetal US image quality assessment (FUIQA) scheme is proposed to assist US image quality control in the clinical obstetric examination. The proposed FUIQA is realized with two deep convolutional neural network models, denoted L-CNN and C-CNN. The L-CNN finds the region of interest (ROI) of the fetal abdominal region in the US image. Based on the ROI found by the L-CNN, the C-CNN evaluates image quality by assessing how well the key structures of the stomach bubble and umbilical vein are depicted. To further boost the performance of the L-CNN, we augment the input sources of the network with local phase features alongside the original US data, and we show that these heterogeneous input sources help improve its performance. The performance of the proposed FUIQA is compared with subjective image quality evaluations from three medical doctors; comprehensive experiments illustrate that the computerized assessment with our FUIQA scheme is comparable to the subjective ratings from medical doctors.
33
Rouet L, Mory B, Attia E, Bredahl K, Long A, Ardon R. A minimally interactive and reproducible method for abdominal aortic aneurysm quantification in 3D ultrasound and computed tomography with implicit template deformations. Comput Med Imaging Graph 2016; 58:75-85. [PMID: 27939282] [DOI: 10.1016/j.compmedimag.2016.11.002]
Abstract
The maximum diameter of abdominal aortic aneurysm (AAA) is a key quantification parameter for disease assessment. Although it is routinely measured on 2D-ultrasound images, using a volumetric approach is expected to improve measurement reproducibility. In this work, 3D-ultrasound or computed tomography imaging of patients with AAA was combined with a minimally interactive 3D segmentation based on implicit template deformation. Segmentation usability and reproducibility were evaluated on 81 patients, showing a mean measurement time of [2;8]min per case, and Dice coefficients of 0.87±0.12 for 3D-US and 0.81±0.08 for CT. Quantification parameters included a diameter measurement from 3D-US and CT volumes with respective confidence intervals of 0.51 [-2.5;3.52]mm and 1.00 [-1.68;3.67]mm. Additional volume measurements showed confidence intervals of 0.91 [-4.17;5.99]ml for 3D-US and 4.10 [-4.11;12.30]ml for CT.
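The Dice coefficients quoted above (0.87 for 3D-US, 0.81 for CT) measure volumetric overlap between the automated segmentation and a reference. As a reminder of the metric only (not the paper's code), here is a minimal computation on flat binary masks with a made-up toy example:

```python
def dice(mask_a, mask_b):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) for flat binary masks."""
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0  # two empty masks agree

# Toy 1-D example: 3 voxels overlap out of 4 + 4 labeled voxels -> 6/8 = 0.75.
auto = [1, 1, 1, 1, 0, 0]
ref  = [0, 1, 1, 1, 1, 0]
score = dice(auto, ref)
```

A score of 1.0 means perfect overlap and 0.0 none; values in the high 0.8s, as reported here for 3D-US, are generally considered strong agreement for soft-tissue segmentation.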
Affiliation(s)
- L Rouet: Philips Research, 33 rue de Verdun, 92156 Suresnes Cedex, France
- B Mory: Philips Research, 33 rue de Verdun, 92156 Suresnes Cedex, France
- E Attia: Philips Research, 33 rue de Verdun, 92156 Suresnes Cedex, France
- K Bredahl: Department of Vascular Surgery, Rigshospitalet, Univ. of Copenhagen, Blegdamsvej 9, 2100 Copenhagen, Denmark
- A Long: Médecine Vasculaire, Hôpital Edouard Herriot, Hospices Civils de Lyon, Place d'Arsonval, 69437 Lyon Cedex 03, France
- R Ardon: Philips Research, 33 rue de Verdun, 92156 Suresnes Cedex, France
34
Real-Time Standard Scan Plane Detection and Localisation in Fetal Ultrasound Using Fully Convolutional Neural Networks. Lecture Notes in Computer Science 2016. [DOI: 10.1007/978-3-319-46723-8_24]
35
Banerjee J, Klink C, Niessen WJ, Moelker A, van Walsum T. 4D Ultrasound Tracking of Liver and its Verification for TIPS Guidance. IEEE Trans Med Imaging 2016; 35:52-62. [PMID: 26168435] [DOI: 10.1109/tmi.2015.2454056]
Abstract
In this work we describe a 4D registration method for on-the-fly stabilization of ultrasound volumes to improve image guidance for transjugular intrahepatic portosystemic shunt (TIPS) interventions. The purpose of the method is to enable continuous visualization of the relevant anatomical planes (determined in a planning stage) in a free-breathing patient during the intervention. This requires registration of the planning information to the interventional images, which is achieved in two steps. In the first step, tracking is performed across the streaming input: an approximate transformation between the reference image and the incoming image is estimated by composing the intermediate transformations obtained from tracking. In the second step, a subsequent registration between the reference image and the approximately transformed incoming image accounts for accumulated error. The two-step approach helps reduce the search range and is robust under rotation. We additionally present an approach to initialize and verify the registration. Verification is required when the reference image (containing planning information) was acquired in the past and is not part of the interventional 4D ultrasound sequence. The verification score helps invalidate the registration outcome, for instance when there is insufficient overlap between the registering images due to probe motion, or insufficient information due to loss of contact. We evaluate the method on thirteen 4D US sequences acquired from eight subjects. A graphics processing unit implementation runs the 4D tracking at 9 Hz with a mean registration error of 1.7 mm.
36
Rueda S, Knight CL, Papageorghiou AT, Noble JA. Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step. Med Image Anal 2015; 26:30-46. [PMID: 26319973] [PMCID: PMC4686006] [DOI: 10.1016/j.media.2015.07.002]
Abstract
Medical ultrasound (US) image segmentation and quantification can be challenging due to signal dropouts, missing boundaries, and presence of speckle, which gives images of similar objects quite different appearance. Typically, purely intensity-based methods do not lead to a good segmentation of the structures of interest. Prior work has shown that local phase and feature asymmetry, derived from the monogenic signal, extract structural information from US images. This paper proposes a new US segmentation approach based on the fuzzy connectedness framework. The approach uses local phase and feature asymmetry to define a novel affinity function, which drives the segmentation algorithm, incorporates a shape-based object completion step, and regularises the result by mean curvature flow. To appreciate the accuracy and robustness of the methodology across clinical data of varying appearance and quality, a novel entropy-based quantitative image quality assessment of the different regions of interest is introduced. The new method is applied to 81 US images of the fetal arm acquired at multiple gestational ages, as a means to define a new automated image-based biomarker of fetal nutrition. Quantitative and qualitative evaluation shows that the segmentation method is comparable to manual delineations and robust across image qualities that are typical of clinical practice.
Affiliation(s)
- Sylvia Rueda: Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK
- Caroline L Knight: Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK; Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK
- Aris T Papageorghiou: Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK; Oxford Maternal & Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
- J Alison Noble: Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK
37
Chen H, Ni D, Qin J, Li S, Yang X, Wang T, Heng PA. Standard Plane Localization in Fetal Ultrasound via Domain Transferred Deep Neural Networks. IEEE J Biomed Health Inform 2015; 19:1627-36. [DOI: 10.1109/jbhi.2015.2425041]
38
Guided Random Forests for Identification of Key Fetal Anatomy and Image Categorization in Ultrasound Scans. Lecture Notes in Computer Science 2015. [DOI: 10.1007/978-3-319-24574-4_82]
39
Chen H, Dou Q, Ni D, Cheng JZ, Qin J, Li S, Heng PA. Automatic Fetal Ultrasound Standard Plane Detection Using Knowledge Transferred Recurrent Neural Networks. Lecture Notes in Computer Science 2015. [DOI: 10.1007/978-3-319-24553-9_62]