1
Kim S, Fischetti C, Guy M, Hsu E, Fox J, Young SD. Artificial Intelligence (AI) Applications for Point of Care Ultrasound (POCUS) in Low-Resource Settings: A Scoping Review. Diagnostics (Basel) 2024; 14:1669. [PMID: 39125545] [PMCID: PMC11312308] [DOI: 10.3390/diagnostics14151669] [Received: 07/13/2024] [Revised: 07/26/2024] [Accepted: 07/28/2024]
Abstract
Advancements in artificial intelligence (AI) for point-of-care ultrasound (POCUS) have ushered in new possibilities for medical diagnostics in low-resource settings. This review explores the current landscape of AI applications in POCUS across these environments, analyzing studies sourced from three databases: Scopus, PubMed, and Google Scholar. Initially, 1196 records were identified, of which 1167 articles were excluded after a two-stage screening, leaving 29 unique studies for review. The majority of studies focused on deep learning algorithms to facilitate POCUS operation and interpretation in resource-constrained settings. Various types of low-resource settings were targeted, with a significant emphasis on low- and middle-income countries (LMICs), rural/remote areas, and emergency contexts. Notable limitations identified include challenges in generalizability, dataset availability, regional disparities in research, patient compliance, and ethical considerations. Additionally, the lack of standardization in POCUS devices, protocols, and algorithms emerged as a significant barrier to AI implementation. The diversity of POCUS AI applications across domains (e.g., lung, hip, heart) illustrates the challenge of tailoring solutions to the specific needs of each application. By separating the analysis by application area, researchers can better understand the distinct impacts and limitations of AI, aligning research and development efforts with the unique characteristics of each clinical condition. Despite these challenges, POCUS AI systems show promise in bridging gaps in healthcare delivery by aiding clinicians in low-resource settings. Future research should prioritize addressing the gaps identified in this review to enhance the feasibility and effectiveness of POCUS AI applications and improve healthcare outcomes in resource-constrained environments.
Affiliation(s)
- Seungjun Kim
- Department of Informatics, University of California, Irvine, CA 92697, USA
- Chanel Fischetti
- Department of Emergency Medicine, Brigham and Women’s Hospital, Boston, MA 02115, USA
- Megan Guy
- Department of Emergency Medicine, University of California, Irvine, CA 92697, USA
- Edmund Hsu
- Department of Emergency Medicine, University of California, Irvine, CA 92697, USA
- John Fox
- Department of Emergency Medicine, University of California, Irvine, CA 92697, USA
- Sean D. Young
- Department of Informatics, University of California, Irvine, CA 92697, USA
- Department of Emergency Medicine, University of California, Irvine, CA 92697, USA
2
Gleed AD, Mishra D, Self A, Thiruvengadam R, Desiraju BK, Bhatnagar S, Papageorghiou AT, Noble JA. Statistical Characterisation of Fetal Anatomy in Simple Obstetric Ultrasound Video Sweeps. Ultrasound Med Biol 2024; 50:985-993. [PMID: 38692940] [DOI: 10.1016/j.ultrasmedbio.2024.03.006] [Received: 12/03/2023] [Revised: 03/14/2024] [Accepted: 03/15/2024]
Abstract
OBJECTIVE We present a statistical characterisation of fetal anatomies in obstetric ultrasound video sweeps where the transducer follows a fixed trajectory on the maternal abdomen. METHODS Large-scale, frame-level manual annotations of fetal anatomies (head, spine, abdomen, pelvis, femur) were used to compute common frame-level anatomy detection patterns expected for breech, cephalic, and transverse fetal presentations, with respect to video sweep paths. The patterns, termed statistical heatmaps, quantify the expected anatomies seen in a simple obstetric ultrasound video sweep protocol. In this study, a total of 760 unique manual annotations from 365 unique pregnancies were used. RESULTS We provide a qualitative interpretation of the heatmaps assessing the transducer sweep paths with respect to different fetal presentations and suggest ways in which the heatmaps can be applied in computational research (e.g., as a machine learning prior). CONCLUSION The heatmap parameters are freely available to other researchers (https://github.com/agleed/calopus_statistical_heatmaps).
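The core computation behind such a statistical heatmap can be sketched in a few lines. This is a hedged illustration, not the authors' released code: the function name `heatmap_row`, the bin count, and the toy annotations are all hypothetical, and the sketch only shows the averaging idea, namely resampling each sweep's binary frame-level annotations for one anatomy to a common number of positions and averaging across sweeps.

```python
def heatmap_row(annotations, n_bins=10):
    """Average binary per-frame annotations (1 = anatomy visible) from
    sweeps of varying length into n_bins normalized sweep positions."""
    totals = [0.0] * n_bins
    counts = [0] * n_bins
    for sweep in annotations:
        for i, label in enumerate(sweep):
            # Map frame index i to a bin along the normalized trajectory.
            b = min(int(i * n_bins / len(sweep)), n_bins - 1)
            totals[b] += label
            counts[b] += 1
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]

# Two toy sweeps in which the anatomy appears mid-sweep.
sweeps = [
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0, 0],
]
print(heatmap_row(sweeps, n_bins=5))
```

A full heatmap would stack one such row per anatomy and per fetal presentation class; the authors' actual parameters are available in the repository linked in the abstract.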
Affiliation(s)
- Alexander D Gleed
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Divyanshu Mishra
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Alice Self
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
3
Yasrab R, Zhao H, Fu Z, Drukker L, Papageorghiou AT, Noble JA. Automating the Human Action of First-Trimester Biometry Measurement from Real-World Freehand Ultrasound. Ultrasound Med Biol 2024; 50:805-816. [PMID: 38467521] [DOI: 10.1016/j.ultrasmedbio.2024.01.018] [Received: 10/26/2023] [Revised: 01/10/2024] [Accepted: 01/25/2024]
Abstract
OBJECTIVE Automated medical image analysis solutions should closely mimic complete human actions to be useful in clinical practice. However, an automated image analysis solution more often represents only part of a human task, which restricts its practical utility. In the case of ultrasound-based fetal biometry, an automated solution should ideally recognize key fetal structures in freehand video guidance, select a standard plane from a video stream and perform biometry. A complete automated solution should automate all three subactions. METHODS In this article, we consider how to automate the complete human action of first-trimester biometry measurement from real-world freehand ultrasound. In the proposed hybrid convolutional neural network (CNN) architecture design, a classification regression-based guidance model detects and tracks fetal anatomical structures (using visual cues) in the ultrasound video. Several high-quality standard planes that contain the mid-sagittal view of the fetus are sampled at multiple time stamps (using a custom-designed confident-frame detector) based on the estimated probability values associated with predicted anatomical structures that define the biometry plane. Automated semantic segmentation is performed on the selected frames to extract fetal anatomical landmarks. A crown-rump length (CRL) estimate is calculated as the mean CRL from these multiple frames. RESULTS Our fully automated method has a high correlation with clinical expert CRL measurement (Pearson's r = 0.92, R-squared [R2] = 0.84) and a low mean absolute error of 0.834 weeks for fetal age estimation on a test data set of 42 videos. CONCLUSION A novel algorithm for standard plane detection employs a quality detection mechanism defined by clinical standards, ensuring precise biometric measurements.
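The reported agreement statistics are standard and easy to reproduce. The sketch below is illustrative only: the helper names and the CRL values are hypothetical, and it simply shows how Pearson correlation, R-squared, and mean absolute error would be computed between automated and expert measurements.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean_absolute_error(xs, ys):
    """Mean absolute difference, e.g. in mm of CRL or weeks of age."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical CRL readings (mm): automated pipeline vs. clinical expert.
auto = [55.0, 60.2, 48.9, 72.4, 66.1]
expert = [54.1, 61.0, 50.2, 71.8, 65.0]

r = pearson_r(auto, expert)
print(f"Pearson r = {r:.3f}, R^2 = {r * r:.3f}")
print(f"MAE = {mean_absolute_error(auto, expert):.3f} mm")
```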
Affiliation(s)
- Robail Yasrab
- Department of Engineering Science, University of Oxford, Oxford, UK; School of Clinical Medicine, University of Cambridge, Cambridge, UK
- He Zhao
- Department of Engineering Science, University of Oxford, Oxford, UK
- Zeyu Fu
- Department of Engineering Science, University of Oxford, Oxford, UK
- Lior Drukker
- Department of Engineering Science, University of Oxford, Oxford, UK; Sackler Faculty of Medicine, Rabin Medical Center, Tel-Aviv University, Tel-Aviv, Israel
- Aris T Papageorghiou
- Nuffield Department of Women's Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble
- Department of Engineering Science, University of Oxford, Oxford, UK
4
Ou Z, Bai J, Chen Z, Lu Y, Wang H, Long S, Chen G. RTSeg-net: A lightweight network for real-time segmentation of fetal head and pubic symphysis from intrapartum ultrasound images. Comput Biol Med 2024; 175:108501. [PMID: 38703545] [DOI: 10.1016/j.compbiomed.2024.108501] [Received: 01/18/2024] [Revised: 03/19/2024] [Accepted: 04/21/2024]
Abstract
The segmentation of the fetal head (FH) and pubic symphysis (PS) from intrapartum ultrasound images plays a pivotal role in monitoring labor progression and informing crucial clinical decisions. Achieving real-time segmentation with high accuracy on systems with limited hardware capabilities presents significant challenges. To address these challenges, we propose the real-time segmentation network (RTSeg-Net), a lightweight deep learning model that incorporates distribution shifting convolutional blocks, tokenized multilayer perceptron blocks, and efficient feature fusion blocks. Designed for computational efficiency, RTSeg-Net minimizes resource demand while significantly enhancing segmentation performance. Our evaluation on two distinct intrapartum ultrasound image datasets shows that RTSeg-Net achieves segmentation accuracy on par with more complex state-of-the-art networks while using merely 1.86 M parameters (just 6% of theirs) and operating seven times faster, reaching 31.13 frames per second on a Jetson Nano, a device with limited computing capacity. These results underscore RTSeg-Net's potential to provide accurate, real-time segmentation on low-power devices, broadening the scope for its application across various stages of labor. By facilitating real-time, accurate ultrasound image analysis on portable, low-cost devices, RTSeg-Net promises to improve intrapartum monitoring and make sophisticated diagnostic tools accessible to a wider range of healthcare settings.
Affiliation(s)
- Zhanhong Ou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China; Auckland Bioengineering Institute, University of Auckland, Auckland, 1010, New Zealand
- Zhide Chen
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Huijin Wang
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Shun Long
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Gaowen Chen
- Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
5
Oghli MG, Bagheri SM, Shabanzadeh A, Mehrjardi MZ, Akhavan A, Shiri I, Taghipour M, Shabanzadeh Z. Fully automated kidney image biomarker prediction in ultrasound scans using Fast-Unet+. Sci Rep 2024; 14:4782. [PMID: 38413748] [PMCID: PMC10899245] [DOI: 10.1038/s41598-024-55106-5] [Received: 06/25/2023] [Accepted: 02/20/2024]
Abstract
Any variation in kidney dimensions and volume can be a remarkable indicator of kidney disorders. Precise kidney segmentation in standard planes plays an undeniable role in predicting kidney size and volume. Meanwhile, ultrasound is the modality of choice in diagnostic procedures. This paper proposes a convolutional neural network with nested layers, namely Fast-Unet++, which builds on the fast and accurate Fast-Unet model. First, the model was trained and evaluated for segmenting sagittal and axial images of the kidney. Then, the predicted masks were used to estimate the kidney image biomarkers, including its volume and dimensions (length, width, thickness, and parenchymal thickness). Finally, the proposed model was tested on a publicly available dataset with various kidney shapes and compared with related networks. Moreover, the network was evaluated using a set of patients who had undergone both ultrasound and computed tomography. The Dice metric, Jaccard coefficient, and mean absolute distance were used to evaluate the segmentation step, yielding 0.97, 0.94, and 3.23 mm for the sagittal frame, and 0.95, 0.90, and 3.87 mm for the axial frame. The kidney dimensions and volume were evaluated using accuracy, area under the curve, sensitivity, specificity, precision, and F1 score.
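Two of the segmentation metrics named in this abstract, Dice and Jaccard, have simple definitions on binary masks. A minimal sketch with hypothetical masks and function names (mean absolute distance is omitted, since it requires contour extraction):

```python
# Hypothetical binary masks flattened to 0/1 lists; in practice these
# would be the predicted and ground-truth kidney masks for one frame.

def dice(a, b):
    """Dice coefficient: twice the intersection over the sum of sizes."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """Jaccard index: intersection over union."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union

pred = [1, 1, 1, 0, 0, 1, 0, 0]
gt = [1, 1, 0, 0, 0, 1, 1, 0]

print(f"Dice = {dice(pred, gt):.3f}, Jaccard = {jaccard(pred, gt):.3f}")
```

Note that the two are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both alongside a distance-based metric.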
Affiliation(s)
- Seyed Morteza Bagheri
- Department of Radiology, Hasheminejad Kidney Center, Iran University of Medical Sciences, Tehran, Iran
- Ali Shabanzadeh
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
- Mohammad Zare Mehrjardi
- Section of Body Imaging, Division of Clinical Research, Climax Radiology Education Foundation, Tehran, Iran
- Ardavan Akhavan
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Mostafa Taghipour
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
- Zahra Shabanzadeh
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
6
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833] [Received: 09/21/2023] [Revised: 10/17/2023] [Accepted: 10/25/2023]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred imaging method. US is considered cost-effective and easily accessible, but it is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of the recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full-text copies were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still-experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
7
Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802] [DOI: 10.1002/pd.6411] [Received: 03/27/2023] [Revised: 06/05/2023] [Accepted: 07/17/2023]
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the following keywords: "neural networks", OR "artificial intelligence", OR "machine learning", OR "deep learning", AND "obstetrics", OR "obstetrical", OR "fetus", OR "foetus", OR "fetal", OR "foetal", OR "pregnancy", OR "pregnant", AND "ultrasound", from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound. Obstetric ultrasound was defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. The search identified a total of 127 papers that fulfilled the inclusion criteria. The current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses, including assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Lea Nehme
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Alfred Abuhamad
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
8
Bastiaansen WAP, Klein S, Koning AHJ, Niessen WJ, Steegers-Theunissen RPM, Rousian M. Computational methods for the analysis of early-pregnancy brain ultrasonography: a systematic review. EBioMedicine 2023; 89:104466. [PMID: 36796233] [PMCID: PMC9958260] [DOI: 10.1016/j.ebiom.2023.104466] [Received: 10/10/2022] [Revised: 01/09/2023] [Accepted: 01/23/2023]
Abstract
BACKGROUND Early screening of the brain is becoming routine clinical practice. Currently, this screening is performed by manual measurements and visual analysis, which is time-consuming and prone to errors. Computational methods may support this screening. Hence, the aim of this systematic review is to gain insight into the future research directions needed to bring automated early-pregnancy ultrasound analysis of the human brain to clinical practice. METHODS We searched PubMed (Medline ALL Ovid), EMBASE, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, and Google Scholar, from inception until June 2022. This study is registered in PROSPERO at CRD42020189888. Studies about computational methods for the analysis of human brain ultrasonography acquired before the 20th week of pregnancy were included. The key reported attributes were: level of automation; whether the method was learning based; the usage of clinical routine data depicting normal and abnormal brain development; public sharing of program source code and data; and analysis of confounding factors. FINDINGS Our search identified 2575 studies, of which 55 were included. 76% used an automatic method, 62% a learning-based method, and 45% used clinical routine data; in addition, for 13% the data depicted abnormal development. None of the studies publicly shared their program source code, and only two studies shared their data. Finally, 35% did not analyse the influence of confounding factors. INTERPRETATION Our review showed an interest in automatic, learning-based methods. To bring these methods to clinical practice, we recommend that studies use routine clinical data depicting both normal and abnormal development, make their datasets and program source code publicly available, and be attentive to the influence of confounding factors. The introduction of automated computational methods for early-pregnancy brain ultrasonography will save valuable time during screening and ultimately lead to better detection, treatment, and prevention of neurodevelopmental disorders. FUNDING The Erasmus MC Medical Research Advisor Committee (grant number: FB 379283).
Affiliation(s)
- Wietske A P Bastiaansen
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands; Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Anton H J Koning
- Department of Pathology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Wiro J Niessen
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Melek Rousian
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
9
Sjoerdsma M, Verstraeten SCFPM, Maas EJ, van de Vosse FN, van Sambeek MRHM, Lopata RGP. Spatiotemporal Registration of 3-D Multi-perspective Ultrasound Images of Abdominal Aortic Aneurysms. Ultrasound Med Biol 2023; 49:318-332. [PMID: 36441033] [DOI: 10.1016/j.ultrasmedbio.2022.09.005] [Received: 04/04/2022] [Revised: 08/02/2022] [Accepted: 09/07/2022]
Abstract
Methods for patient-specific abdominal aortic aneurysm (AAA) progression monitoring and rupture risk assessment are widely investigated. Three-dimensional ultrasound can visualize the AAA's complex geometry and displacement fields. However, ultrasound has a limited field of view and a low frame rate (i.e., 3-8 Hz). This article describes an approach to enhance both the temporal resolution and the field of view. First, the frame rate was increased for each data set by sequencing multiple blood pulse cycles into one cycle. The sequencing method uses the original frame rate and the estimated pulse wave rate obtained from AAA distension curves. Second, temporal registration was applied to multi-perspective acquisitions of the same AAA. Third, the field of view was increased through spatial registration and fusion, using an image feature-based phase-only correlation method and a wavelet transform, respectively. Temporal sequencing was fully correct in aortic phantoms and was successful in 51 of 62 AAA patients, yielding a fivefold frame rate increase. Spatial registration of proximal and distal ultrasound acquisitions was successful in 32 of 37 AAA patients, based on comparison between the fused ultrasound and computed tomography segmentations (95th percentile Hausdorff distances of 4.2 ± 1.7 mm and similarity indices of 0.92 ± 0.02, respectively). Furthermore, the field of view was enlarged by 9%-49%.
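The sequencing step can be illustrated with a minimal toy sketch. This is not the authors' method in detail: the function name, timestamps, and pulse period are hypothetical, and the sketch only shows the core idea that frames acquired at a low rate over several pulse cycles can be reassigned, via the estimated pulse period, to phases within a single cycle, densifying that one cycle.

```python
def resequence(timestamps, pulse_period):
    """Return frame indices ordered by their phase within one pulse
    cycle, i.e. timestamp modulo the estimated pulse period."""
    phases = [t % pulse_period for t in timestamps]
    return sorted(range(len(timestamps)), key=lambda i: phases[i])

# Hypothetical acquisition: 3 Hz frame rate over 5 s, pulse period 0.9 s.
# The 15 sparsely sampled frames interleave into one dense cycle.
frame_times = [i / 3 for i in range(15)]
order = resequence(frame_times, pulse_period=0.9)
print(order)
```

Because the frame interval and pulse period are incommensurate here, consecutive cycles contribute frames at different phases, which is what raises the effective frame rate of the reconstructed cycle.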
Affiliation(s)
- Marloes Sjoerdsma
- Photoacoustics & Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Department of Vascular Surgery, Catharina Hospital Eindhoven, Eindhoven, The Netherlands
- Sabine C F P M Verstraeten
- Photoacoustics & Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Cardiovascular Biomechanics Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Esther J Maas
- Photoacoustics & Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Department of Vascular Surgery, Catharina Hospital Eindhoven, Eindhoven, The Netherlands
- Frans N van de Vosse
- Cardiovascular Biomechanics Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Marc R H M van Sambeek
- Department of Vascular Surgery, Catharina Hospital Eindhoven, Eindhoven, The Netherlands
- Richard G P Lopata
- Photoacoustics & Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
10
Gleed AD, Chen Q, Jackman J, Mishra D, Chandramohan V, Self A, Bhatnagar S, Papageorghiou AT, Noble JA. Automatic Image Guidance for Assessment of Placenta Location in Ultrasound Video Sweeps. Ultrasound Med Biol 2023; 49:106-121. [PMID: 36241588] [DOI: 10.1016/j.ultrasmedbio.2022.08.006] [Received: 02/09/2022] [Revised: 06/06/2022] [Accepted: 08/03/2022]
Abstract
Ultrasound-based assistive tools aim to reduce the high skill needed to interpret a scan by providing automatic image guidance. This may encourage uptake of ultrasound (US) clinical assessments in rural settings in low- and middle-income countries (LMICs), where well-trained sonographers can be scarce. This paper describes a new method that automatically generates an assistive video overlay to guide a user in assessing placenta location. The user captures US video by following a sweep protocol that scans a U-shape on the lower maternal abdomen. The sweep trajectory is simple and easy to learn. We initially explore a 2-D embedding of placenta shapes, mapping manually segmented placentas in US video frames to a 2-D space. We map 2013 frames from 11 videos. This provides insight into the spectrum of placenta shapes that appear when using the sweep protocol. We propose a classification of placenta shapes from three observed clusters: complex, tip and rectangular. We use this insight to design an effective automatic segmentation algorithm, combining a U-Net with a CRF-RNN module to enhance segmentation performance with respect to placenta shape. The U-Net + CRF-RNN algorithm automatically segments the placenta and maternal bladder. We assess segmentation performance using both area and shape metrics. We report results comparable to the state of the art for automatic placenta segmentation on the Dice metric, achieving 0.83 ± 0.15, evaluated on 2127 frames from 10 videos. We also qualitatively evaluate 78,308 frames from 135 videos, assessing whether the anatomical outline is correctly segmented. We found that the addition of the CRF-RNN improves over a baseline U-Net by up to 14% in percentage shape error when faced with a complex placenta shape, as observed in our 2-D embedding. From the segmentations, an assistive video overlay is automatically constructed that (i) highlights the placenta and bladder, (ii) determines the lower placenta edge and highlights this location as a point and (iii) labels a 2-cm clearance on the lower placenta edge. The 2-cm clearance is chosen to satisfy current clinical guidelines. We propose to assess placenta location by comparing the 2-cm region with the bottom of the bladder, which represents a coarse localization of the cervix. Anatomically, the bladder must sit above the cervix region. We present proof-of-concept results for the video overlay.
Affiliation(s)
- Alexander D Gleed
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Qingchao Chen
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- James Jackman
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Divyanshu Mishra
- Translational Health Science and Technology Institute, Faridabad, India
- Alice Self
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
11
Caspi Y, de Zwarte SMC, Iemenschot IJ, Lumbreras R, de Heus R, Bekker MN, Hulshoff Pol H. Automatic measurements of fetal intracranial volume from 3D ultrasound scans. Front Neuroimaging 2022; 1:996702. [PMID: 37555155] [PMCID: PMC10406279] [DOI: 10.3389/fnimg.2022.996702] [Received: 07/18/2022] [Accepted: 09/15/2022]
Abstract
Three-dimensional fetal ultrasound is commonly used to study the volumetric development of brain structures. To date, only a limited number of automatic procedures for delineating the intracranial volume exist; hence, intracranial volume measurements from three-dimensional ultrasound images are predominantly performed manually. Here, we present and validate an automated tool to extract the intracranial volume from three-dimensional fetal ultrasound scans. The procedure is based on the registration of a brain model to a subject brain. The intracranial volume of the subject is measured by applying the inverse of the final transformation to an intracranial mask of the brain model. The automatic measurements showed a high correlation with manual delineation of the same subjects at two gestational ages, namely around 20 and 30 weeks (linear fit R2 = 0.88 at 20 weeks and 0.77 at 30 weeks; intraclass correlation coefficients 0.94 and 0.84, respectively). Overall, the automatic intracranial volumes were larger than the manually delineated ones (84 ± 16 vs. 76 ± 15 cm3 and 274 ± 35 vs. 237 ± 28 cm3), probably due to differences in cerebellum delineation. Notably, the automated measurements reproduced both the non-linear pattern of fetal brain growth and the increased inter-subject variability for older fetuses. By contrast, there was some disagreement between the manual and automatic delineation concerning the size of sexual dimorphism differences. The method presented here provides a relatively efficient way to delineate volumes of fetal brain structures such as the intracranial volume automatically. It can be used as a research tool to investigate these structures in large cohorts, which will ultimately aid in understanding fetal structural human brain development.
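The final read-out step described above, measuring volume from the inverse-transformed intracranial mask, reduces to counting mask voxels. A minimal sketch with hypothetical names and a toy mask; the registration itself is omitted:

```python
def mask_volume_cm3(mask, voxel_mm3):
    """Volume of a binary 3-D mask (nested lists) in cubic centimetres:
    number of set voxels times the volume of one voxel."""
    n = sum(v for plane in mask for row in plane for v in row)
    return n * voxel_mm3 / 1000.0  # mm^3 -> cm^3

# Toy 2x2x2 mask with 5 voxels set; 0.5 mm isotropic voxels (0.125 mm^3).
mask = [[[1, 1], [1, 0]], [[1, 1], [0, 0]]]
print(mask_volume_cm3(mask, voxel_mm3=0.125))
```

In practice the mask would be a full-resolution 3-D array warped into subject space, and the voxel volume would come from the scan's spacing metadata.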
Affiliation(s)
- Yaron Caspi: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Sonja M. C. de Zwarte: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Iris J. Iemenschot: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Raquel Lumbreras: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Roel de Heus: Department of Obstetrics and Gynaecology, St. Antonius Hospital, Utrecht, Netherlands; Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Mireille N. Bekker: Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Hilleke Hulshoff Pol: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands; Department of Psychology, Utrecht University, Utrecht, Netherlands
12
Imaging fetal anatomy. Semin Cell Dev Biol 2022; 131:78-92. [PMID: 35282997] [DOI: 10.1016/j.semcdb.2022.02.023]
Abstract
Due to advancements in ultrasound techniques, the focus of antenatal ultrasound screening is moving towards the first trimester of pregnancy. The early first trimester, however, remains in part a 'black box', due to the size of the developing embryo and the limitations of contemporary scanning techniques. There is therefore a need for images of early anatomical development to improve our understanding of this area. New imaging techniques not only yield better images that further our knowledge of early embryonic development; clear images of embryonic and fetal development can also be used in training, e.g., for sonographers and fetal surgeons, or to educate parents expecting a child with a fetal anomaly. The aim of this review is to provide an overview of the past, present and future techniques used to capture images of the developing human embryo and fetus, and to provide the reader with the newest insights into upcoming and promising imaging techniques. The reader is taken from the earliest drawings of da Vinci, along the advancements in the fields of in utero ultrasound and MR imaging, towards high-resolution ex utero imaging using micro-CT and ultra-high-field MRI. Finally, a future perspective is given on the use of artificial intelligence in ultrasound and on potential new imaging techniques, such as synchrotron radiation-based CT, to increase our knowledge regarding human development.
13
Gomes RG, Vwalika B, Lee C, Willis A, Sieniek M, Price JT, Chen C, Kasaro MP, Taylor JA, Stringer EM, McKinney SM, Sindano N, Dahl GE, Goodnight W, Gilmer J, Chi BH, Lau C, Spitz T, Saensuksopa T, Liu K, Tiyasirichokchai T, Wong J, Pilgrim R, Uddin A, Corrado G, Peng L, Chou K, Tse D, Stringer JSA, Shetty S. A mobile-optimized artificial intelligence system for gestational age and fetal malpresentation assessment. Commun Med (Lond) 2022; 2:128. [PMID: 36249461] [PMCID: PMC9553916] [DOI: 10.1038/s43856-022-00194-5]
Abstract
Background Fetal ultrasound is an important component of antenatal care, but shortage of adequately trained healthcare workers has limited its adoption in low-to-middle-income countries. This study investigated the use of artificial intelligence for fetal ultrasound in under-resourced settings. Methods Blind sweep ultrasounds, consisting of six freehand ultrasound sweeps, were collected by sonographers in the USA and Zambia, and novice operators in Zambia. We developed artificial intelligence (AI) models that used blind sweeps to predict gestational age (GA) and fetal malpresentation. AI GA estimates and standard fetal biometry estimates were compared to a previously established ground truth, and evaluated for difference in absolute error. Fetal malpresentation (non-cephalic vs cephalic) was compared to sonographer assessment. On-device AI model run-times were benchmarked on Android mobile phones. Results Here we show that GA estimation accuracy of the AI model is non-inferior to standard fetal biometry estimates (error difference -1.4 ± 4.5 days, 95% CI -1.8, -0.9, n = 406). Non-inferiority is maintained when blind sweeps are acquired by novice operators performing only two of six sweep motion types. Fetal malpresentation AUC-ROC is 0.977 (95% CI, 0.949, 1.00, n = 613), sonographers and novices have similar AUC-ROC. Software run-times on mobile phones for both diagnostic models are less than 3 s after completion of a sweep. Conclusions The gestational age model is non-inferior to the clinical standard and the fetal malpresentation model has high AUC-ROCs across operators and devices. Our AI models are able to run on-device, without internet connectivity, and provide feedback scores to assist in upleveling the capabilities of lightly trained ultrasound operators in low resource settings.
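The non-inferiority claim above rests on the mean paired difference in absolute error (AI minus standard biometry) and its confidence interval. A minimal sketch of that statistic, using hypothetical data and a normal-approximation CI (the study's exact statistical procedure may differ):

```python
import numpy as np

def paired_error_difference_ci(abs_err_ai, abs_err_ref, z=1.96):
    """Mean paired difference in absolute error (AI minus reference)
    with a normal-approximation 95% CI. Non-inferiority at margin m
    holds when the upper CI bound stays below m."""
    d = np.asarray(abs_err_ai, float) - np.asarray(abs_err_ref, float)
    mean = d.mean()
    se = d.std(ddof=1) / np.sqrt(len(d))
    return mean, (mean - z * se, mean + z * se)

# Hypothetical absolute gestational-age errors (days) for 5 paired cases
ai  = [3.0, 5.0, 2.0, 4.0, 6.0]
ref = [4.0, 5.5, 3.0, 4.5, 6.5]
mean_diff, (lo, hi) = paired_error_difference_ci(ai, ref)
```

A negative mean difference with an upper bound below zero, as reported in the abstract (-1.4 days, 95% CI -1.8 to -0.9), indicates the AI errors are actually smaller than the reference's.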
Affiliation(s)
- Bellington Vwalika: Department of Obstetrics and Gynaecology, University of Zambia School of Medicine, Lusaka, Zambia; Department of Obstetrics and Gynecology, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Joan T. Price: Department of Obstetrics and Gynecology, University of North Carolina School of Medicine, Chapel Hill, NC, USA; UNC Global Projects—Zambia, LLC, Lusaka, Zambia
- Margaret P. Kasaro: Department of Obstetrics and Gynaecology, University of Zambia School of Medicine, Lusaka, Zambia; UNC Global Projects—Zambia, LLC, Lusaka, Zambia
- Elizabeth M. Stringer: Department of Obstetrics and Gynecology, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- William Goodnight: Department of Obstetrics and Gynaecology, University of Zambia School of Medicine, Lusaka, Zambia
- Benjamin H. Chi: Department of Obstetrics and Gynecology, University of North Carolina School of Medicine, Chapel Hill, NC, USA; UNC Global Projects—Zambia, LLC, Lusaka, Zambia
- Kris Liu: Google Health, Palo Alto, CA, USA
- Jeffrey S. A. Stringer: Department of Obstetrics and Gynecology, University of North Carolina School of Medicine, Chapel Hill, NC, USA; UNC Global Projects—Zambia, LLC, Lusaka, Zambia
14
Self A, Chen Q, Desiraju BK, Dhariwal S, Gleed AD, Mishra D, Thiruvengadam R, Chandramohan V, Craik R, Wilden E, Khurana A, Bhatnagar S, Papageorghiou AT, Noble JA. Developing Clinical Artificial Intelligence for Obstetric Ultrasound to Improve Access in Underserved Regions: Protocol for a Computer-Assisted Low-Cost Point-of-Care UltraSound (CALOPUS) Study. JMIR Res Protoc 2022; 11:e37374. [PMID: 36048518] [PMCID: PMC9478819] [DOI: 10.2196/37374]
Abstract
BACKGROUND The World Health Organization recommends a package of pregnancy care that includes obstetric ultrasound scans. There are significant barriers to universal access to antenatal ultrasound, particularly because of the cost and maintenance needs of ultrasound equipment and a lack of trained personnel. As low-cost, handheld ultrasound devices have become widely available, the current roadblock is the global shortage of health care providers trained in obstetric scanning. OBJECTIVE The aim of this study is to improve pregnancy and risk assessment for women in underserved regions. Therefore, we are undertaking the Computer-Assisted Low-Cost Point-of-Care UltraSound (CALOPUS) project, bringing together experts in machine learning and clinical obstetric ultrasound. METHODS In this prospective study conducted in two clinical centers (United Kingdom and India), each participating pregnant woman underwent 2 consecutive ultrasound scans. The first was a series of simple, standardized ultrasound sweeps (the CALOPUS protocol), immediately followed by a routine, full clinical ultrasound examination that served as the comparator. We describe the development of a simple-to-use clinical protocol designed for nonexpert users to assess fetal viability, detect the presence of multiple pregnancies, evaluate placental location, assess amniotic fluid volume, determine fetal presentation, and perform basic fetal biometry. The CALOPUS protocol was designed using the smallest number of steps to minimize redundant information while maximizing diagnostic information. Here, we describe how ultrasound videos and annotations are captured for machine learning. RESULTS Over 5571 scans have been acquired, from which 1,541,751 label annotations have been made. An adapted protocol, including a low pelvic brim sweep and a well-filled maternal bladder, improved visualization of the cervix from 28% to 91% and classification of placental location from 82% to 94%. Excellent levels of intra- and interannotator agreement are achievable following training and standardization. CONCLUSIONS The CALOPUS study is a unique study that uses obstetric ultrasound videos and annotations from pregnancies dated from 11 weeks and followed up until birth, using novel ultrasound and annotation protocols. The data from this study are being used to develop and test several different machine learning algorithms to address key clinical diagnostic questions pertaining to obstetric risk management. We also highlight some of the challenges and potential solutions to interdisciplinary multinational imaging collaboration. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR1-10.2196/37374.
Affiliation(s)
- Alice Self: Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, United Kingdom
- Qingchao Chen: Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
- Sumeet Dhariwal: Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
- Alexander D Gleed: Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
- Divyanshu Mishra: Translational Health Science and Technology Institute, Faridabad, India
- Rachel Craik: Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, United Kingdom
- Elizabeth Wilden: Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
- Aris T Papageorghiou: Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, United Kingdom; Oxford Maternal & Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, United Kingdom
- J Alison Noble: Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
15
Balancing regional and global information: An interactive segmentation framework for ultrasound breast lesion. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103723]
16
Shu X, Gu Y, Zhang X, Hu C, Cheng K. FCRB U-Net: A novel fully connected residual block U-Net for fetal cerebellum ultrasound image segmentation. Comput Biol Med 2022; 148:105693. [PMID: 35717404] [DOI: 10.1016/j.compbiomed.2022.105693]
Abstract
In this paper, we propose a novel U-Net with fully connected residual blocks (FCRB U-Net) for the fetal cerebellum ultrasound image segmentation task. FCRB U-Net, an improved convolutional neural network (CNN) based on U-Net, replaces the double convolution operation in the original model with the fully connected residual block and embeds an effective channel attention module to enhance the extraction of valid features. Moreover, in the decoding stage, a feature reuse module is employed to form a fully connected decoder that makes full use of deep features. FCRB U-Net can effectively alleviate the loss of feature information during the convolution process and improve segmentation accuracy. Experimental results demonstrate that the proposed approach is effective and promising for fetal cerebellar segmentation in actual ultrasound images. The average IoU value and mean Dice index reach 86.72% and 90.45%, respectively, which are 3.07% and 5.25% higher than those of the basic U-Net.
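The IoU and Dice figures quoted above are the standard overlap metrics for binary segmentation masks. A minimal numpy sketch of both (illustrative only, not the authors' code):

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, target):
    """Intersection over union: |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy 2x3 masks: intersection = 2, |pred| = 3, |gt| = 3, union = 4
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
```

Dice is always at least as large as IoU for the same masks (Dice = 2 IoU / (1 + IoU)), which is why the paper's Dice (90.45%) exceeds its IoU (86.72%).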
Affiliation(s)
- Xin Shu: School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, China
- Yingyan Gu: School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, China
- Xin Zhang: Department of Medical Ultrasound, Affiliated Hospital of Jiangsu University, Zhenjiang, 212003, China
- Chunlong Hu: School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, China
- Ke Cheng: School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, China
17
ECAU-Net: Efficient channel attention U-Net for fetal ultrasound cerebellum segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103528]
18
Fetal ultrasound image segmentation using dilated multi-scale-LinkNet. Int J Health Sci (Qassim) 2022. [DOI: 10.53730/ijhs.v6ns1.6047]
Abstract
Ultrasound imaging is routinely conducted for prenatal care in many countries to determine the health of the fetus, the progress of the pregnancy, and the baby's due date. The intrinsic properties of fetal images at different stages of pregnancy make automatic extraction of the fetal head from ultrasound image data difficult. The proposed work develops a deep learning model called Dilated Multi-scale-LinkNet for segmenting fetal skulls automatically from two-dimensional ultrasound image data. The network is built on LinkNet, which offers better interpretability in biomedical applications. Convolutional layers with dilations are added following the encoders; dilated convolution enlarges the receptive field without downsampling the image, preventing loss of spatial detail. The model is trained and evaluated on the HC18 grand challenge dataset, which contains 2D ultrasound images at different pregnancy stages. Experiments on ultrasound images of women at different stages of pregnancy show a 94.82% Dice score, 1.9 mm ADF, 0.72 DF and 2.02 HD when segmenting the fetal skull. Employing Dilated Multi-scale-LinkNet improves the accuracy, as well as all the other evaluation metrics of the segmentation, compared with existing methods.
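The dilation idea the model relies on can be illustrated in one dimension: spacing the kernel taps apart widens the receptive field without adding weights or shrinking resolution. A toy numpy sketch (illustrative only, not the paper's architecture):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution (correlation form, as in deep learning)
    with a dilated kernel: tap i reads input position n + i*dilation,
    equivalent to inserting zeros between the kernel taps."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out = np.empty(len(x) - span + 1)
    for n in range(len(out)):
        out[n] = sum(kernel[i] * x[n + i * dilation] for i in range(k))
    return out

x = np.arange(8, dtype=float)              # [0, 1, ..., 7]
k = np.array([1.0, 1.0, 1.0])
plain   = dilated_conv1d(x, k, dilation=1) # 3-sample receptive field
dilated = dilated_conv1d(x, k, dilation=2) # 5-sample field, same 3 weights
```

With dilation 2, the same three weights cover five input samples, which is the mechanism Dilated Multi-scale-LinkNet uses (in 2D) to capture context at multiple scales.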
19
Schilpzand M, Neff C, van Dillen J, van Ginneken B, Heskes T, de Korte C, van den Heuvel T. Automatic Placenta Localization From Ultrasound Imaging in a Resource-Limited Setting Using a Predefined Ultrasound Acquisition Protocol and Deep Learning. Ultrasound Med Biol 2022; 48:663-674. [PMID: 35063289] [DOI: 10.1016/j.ultrasmedbio.2021.12.006]
Abstract
Placenta localization from obstetric 2-D ultrasound (US) imaging is unattainable for many pregnant women in low-income countries because of a severe shortage of trained sonographers. To address this problem, we present a method to automatically detect low-lying placenta or placenta previa from 2-D US imaging. Two-dimensional US data from 280 pregnant women were collected in Ethiopia using a standardized acquisition protocol and low-cost equipment. The detection method consists of two parts. First, 2-D US segmentation of the placenta is performed using a deep learning model with a U-Net architecture. Second, the segmentation is used to classify each placenta as either normal or a class including both low-lying placenta and placenta previa. The segmentation model was trained and tested on 6574 2-D US images, achieving a median test Dice coefficient of 0.84 (interquartile range = 0.23). The classifier achieved a sensitivity of 81% and a specificity of 82% on a holdout test set of 148 cases. Additionally, the model was found to segment in real time (19 ± 2 ms per 2-D US image) using a smartphone paired with a low-cost 2-D US device. This work illustrates the feasibility of using automated placenta localization in a resource-limited setting.
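The reported 81% sensitivity and 82% specificity follow the usual confusion-matrix definitions for a binary classifier. A minimal sketch with hypothetical labels (not the study's data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    Convention here: 1 = abnormal (low-lying/previa), 0 = normal."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical holdout set: 4 abnormal and 6 normal placentas
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

For a screening task like this, sensitivity is the clinically critical number: a missed placenta previa is costlier than a false referral.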
Affiliation(s)
- Martijn Schilpzand: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Medical Ultrasound Imaging Centre, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Chase Neff: Medical Ultrasound Imaging Centre, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Jeroen van Dillen: Department of Obstetrics, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Tom Heskes: Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Chris de Korte: Medical Ultrasound Imaging Centre, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Physics of Fluids Group, Technical Medical Center, University of Twente, Enschede, The Netherlands
- Thomas van den Heuvel: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Medical Ultrasound Imaging Centre, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
20
Wu Y, Zhang Y, Zou X, Yuan Z, Hu W, Lu S, Sun X, Wu Y. Estimated date of delivery with electronic medical records by a hybrid GBDT-GRU model. Sci Rep 2022; 12:4892. [PMID: 35318360] [PMCID: PMC8941136] [DOI: 10.1038/s41598-022-08664-5]
Abstract
An accurate estimated date of delivery (EDD) helps pregnant women make adequate preparations before delivery and avoid the panic of parturition. The EDD is normally derived from formulas or estimated by doctors based on the last menstrual period and ultrasound examinations. This study combined antenatal examinations and electronic medical records to develop a hybrid model based on a Gradient Boosting Decision Tree and a Gated Recurrent Unit (GBDT-GRU). Besides exploring the features that affect the EDD, the GBDT-GRU model produces its results by dynamic prediction at different stages of pregnancy. The mean squared error (MSE) and coefficient of determination (R2) were used to compare performance among the different prediction methods. In addition, we evaluated the predictive performance of the different models by comparing the proportion of pregnant women falling within error bounds of different numbers of days. Experimental results showed that the hybrid GBDT-GRU model outperformed the other prediction methods because it focuses on analyzing the time-series predictors of pregnancy. The results of this study can inform guidelines for clinical delivery treatments and assist clinicians in making correct decisions during obstetric examinations.
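The two evaluation metrics named above, MSE and the coefficient of determination, are standard regression metrics. A minimal numpy sketch with hypothetical gestation-length data (not the study's dataset):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error of a regression prediction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical gestation lengths (days) vs. model predictions
y_true = [280, 275, 282, 278, 285]
y_pred = [279, 276, 281, 280, 284]
```

R2 compares the model against the trivial predict-the-mean baseline, so a value near 1 means the model explains most of the variance in delivery dates.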
Affiliation(s)
- Yina Wu: Engineering Research Center of Mobile Health Management, Ministry of Education, Hangzhou Normal University, Hangzhou, China
- Yichao Zhang: Engineering Research Center of Mobile Health Management, Ministry of Education, Hangzhou Normal University, Hangzhou, China
- Xu Zou: Hangzhou Hele Tech. Co, Hangzhou, China
- Zhenming Yuan: Engineering Research Center of Mobile Health Management, Ministry of Education, Hangzhou Normal University, Hangzhou, China
- Sha Lu: Hangzhou Women's Hospital, Hangzhou, China
- Xiaoyan Sun: Engineering Research Center of Mobile Health Management, Ministry of Education, Hangzhou Normal University, Hangzhou, China
- Yingfei Wu: Engineering Research Center of Mobile Health Management, Ministry of Education, Hangzhou Normal University, Hangzhou, China
21
Lin M, He X, Guo H, He M, Zhang L, Xian J, Lei T, Xu Q, Zheng J, Feng J, Hao C, Yang Y, Wang N, Xie H. Use of real-time artificial intelligence in detection of abnormal image patterns in standard sonographic reference planes in screening for fetal intracranial malformations. Ultrasound Obstet Gynecol 2022; 59:304-316. [PMID: 34940999] [DOI: 10.1002/uog.24843]
Abstract
OBJECTIVES To develop and validate an artificial intelligence system, the Prenatal ultrasound diagnosis Artificial Intelligence Conduct System (PAICS), to detect different patterns of fetal intracranial abnormality in standard sonographic reference planes for screening for congenital central nervous system (CNS) malformations. METHODS Neurosonographic images from normal fetuses and fetuses with CNS malformations at 18-40 gestational weeks were retrieved from the databases of two tertiary hospitals in China and assigned randomly (ratio, 8:1:1) to training, fine-tuning and internal validation datasets to develop and evaluate the PAICS. The system was built based on a real-time convolutional neural network (CNN) algorithm, You Only Look Once, version 3 (YOLOv3). An image dataset from a third tertiary hospital was used to further validate, externally, the performance of the PAICS and to compare its performance with that of sonologists with different levels of expertise. Furthermore, a prospective video dataset was employed to evaluate the performance of the PAICS in a real-time scan scenario. The diagnostic accuracy, sensitivity, specificity and area under the receiver-operating-characteristics curve (AUC) were calculated to assess the performance of the PAICS and to compare this with the performance of sonologists with different levels of experience. RESULTS In total, 43 890 images from 16 297 pregnancies and 169 videos from 166 pregnancies were used to develop and validate the PAICS. The system achieved excellent performance in identifying 10 types of intracranial image pattern, with macro- and microaverage AUCs, respectively, of 0.933 (95% CI, 0.798-1.000) and 0.977 (95% CI, 0.970-0.985) for the internal validation image dataset, 0.902 (95% CI, 0.816-0.989) and 0.898 (95% CI, 0.885-0.911) for the external validation image dataset and 0.969 (95% CI, 0.886-1.000) and 0.981 (95% CI, 0.974-0.988) in the real-time scan setting. 
The performance of the PAICS was comparable to that of expert sonologists in terms of macro- and microaverage accuracy (P = 0.863 and P = 0.775, respectively), sensitivity (P = 0.883, P = 0.846) and AUC (P = 0.891, P = 0.788), but required significantly less time (0.025 s per image for PAICS vs 4.4 s for experts, P < 0.001). CONCLUSIONS Both in the image dataset and in the real-time scan setting, the PAICS achieved excellent diagnostic performance for various fetal CNS abnormalities. Its performance was comparable to that of experts, but it required less time. A CNN algorithm can be trained to detect fetal CNS abnormalities. The PAICS has the potential to be an effective and efficient tool in screening for fetal CNS malformations in clinical practice. © 2021 International Society of Ultrasound in Obstetrics and Gynecology.
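The AUC figures reported above can be computed without explicitly tracing a ROC curve: AUC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one (the Mann-Whitney formulation). An illustrative sketch with hypothetical scores (not the PAICS model's outputs):

```python
import numpy as np

def auc_roc(labels, scores):
    """AUC-ROC as the probability that a random positive outscores
    a random negative (Mann-Whitney U / (n_pos * n_neg)), counting
    ties as one half."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = 0.0
    for p in pos:
        wins += np.sum(p > neg) + 0.5 * np.sum(p == neg)
    return wins / (len(pos) * len(neg))

# Hypothetical abnormality scores for 3 abnormal and 3 normal planes
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
```

An AUC of 0.5 means the scores carry no information, 1.0 means perfect ranking; the PAICS values above 0.9 sit close to the perfect-ranking end.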
Affiliation(s)
- M Lin: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- X He: Department of Ultrasound, Women and Children's Hospital affiliated to Xiamen University, Fujian, China
- H Guo: Department of Ultrasound, Dongguan Maternal and Child Health Hospital, Dongguan, Guangdong, China
- M He: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- L Zhang: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- J Xian: Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China & School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- T Lei: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Q Xu: Department of Ultrasound, Dongguan Maternal and Child Health Hospital, Dongguan, Guangdong, China
- J Zheng: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- J Feng: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- C Hao: Department of Medical Statistics & Sun Yat-sen Global Health Institute, School of Public Health and Institute of State Governance, Sun Yat-sen University, Guangzhou, Guangdong, China
- Y Yang: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- N Wang: Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China
- H Xie: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
22
Torres HR, Morais P, Oliveira B, Birdir C, Rüdiger M, Fonseca JC, Vilaça JL. A review of image processing methods for fetal head and brain analysis in ultrasound images. Comput Methods Programs Biomed 2022; 215:106629. [PMID: 35065326] [DOI: 10.1016/j.cmpb.2022.106629]
Abstract
BACKGROUND AND OBJECTIVE Examination of the head shape and brain during the fetal period is paramount to evaluating head growth, predicting neurodevelopment, and diagnosing fetal abnormalities. Prenatal ultrasound is the imaging modality most used for this evaluation. However, manual interpretation of these images is challenging, and image processing methods to aid this task have therefore been proposed in the literature. This article presents a review of these state-of-the-art methods. METHODS This work analyzes and categorizes the different image processing methods used to evaluate the fetal head and brain in ultrasound imaging. To that end, a total of 109 articles published since 2010 were analyzed. Different applications are covered in this review, namely analysis of the head shape and inner structures of the brain, identification of standard clinical planes, fetal development analysis, and methods for image processing enhancement. RESULTS For each application, the reviewed techniques are categorized according to their theoretical approach, and the image processing methods best suited to accurately analyzing the head and brain are identified. Furthermore, future research needs are discussed. Finally, topics whose research is lacking in the literature are outlined, along with new fields of application. CONCLUSIONS A multitude of image processing methods has been proposed for fetal head and brain analysis. In summary, techniques from different categories have shown their potential to improve clinical practice. Nevertheless, further research must be conducted to strengthen the current methods, especially for 3D image analysis and acquisition and for abnormality detection.
Affiliation(s)
- Helena R Torres: Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Pedro Morais: 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Bruno Oliveira: Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Cahit Birdir: Department of Gynecology and Obstetrics, University Hospital Carl Gustav Carus, TU Dresden, Germany; Saxony Center for Feto-Neonatal Health, TU Dresden, Germany
- Mario Rüdiger: Department for Neonatology and Pediatric Intensive Care, University Hospital Carl Gustav Carus, TU Dresden, Germany
- Jaime C Fonseca: Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- João L Vilaça: 2Ai - School of Technology, IPCA, Barcelos, Portugal
23
Shaddock L, Smith T. Potential for Use of Portable Ultrasound Devices in Rural and Remote Settings in Australia and Other Developed Countries: A Systematic Review. J Multidiscip Healthc 2022; 15:605-625. [PMID: 35378744] [PMCID: PMC8976575] [DOI: 10.2147/jmdh.s359084]
Affiliation(s)
- Liam Shaddock
- Medical Radiation Science, School of Health Sciences, The University of Newcastle, Newcastle, New South Wales, Australia
- Tony Smith
- The University of Newcastle Department of Rural Health & School of Health Sciences, The University of Newcastle, Newcastle, New South Wales, Australia
- Correspondence: Tony Smith, The University of Newcastle Department of Rural Health, C/- 69A High Street, Taree, Newcastle, NSW, Australia, Tel +61 466 440 037, Email
24
Płotka S, Klasa A, Lisowska A, Seliga-Siwecka J, Lipa M, Trzciński T, Sitek A. Deep learning fetal ultrasound video model match human observers in biometric measurements. Phys Med Biol 2022; 67. [PMID: 35051921; DOI: 10.1088/1361-6560/ac4d85]
Abstract
Objective. This work investigates the use of deep convolutional neural networks (CNN) to automatically perform measurements of fetal body parts, including head circumference, biparietal diameter, abdominal circumference and femur length, and to estimate gestational age and fetal weight using fetal ultrasound videos. Approach. We developed a novel multi-task CNN-based spatio-temporal fetal US feature extraction and standard plane detection algorithm (called FUVAI) and evaluated the method on 50 freehand fetal US video scans. We compared FUVAI fetal biometric measurements with measurements made by five experienced sonographers at two time points separated by at least two weeks. Intra- and inter-observer variabilities were estimated. Main results. We found that automated fetal biometric measurements obtained by FUVAI were comparable to the measurements performed by experienced sonographers. The observed differences in measurement values were within the range of inter- and intra-observer variability. Moreover, analysis showed that these differences were not statistically significant when comparing any individual medical expert to our model. Significance. We argue that FUVAI has the potential to assist sonographers who perform fetal biometric measurements in clinical settings by providing them with suggestions regarding the best measuring frames, along with automated measurements. Moreover, FUVAI is able to perform these tasks in just a few seconds, compared with the average of six minutes taken by sonographers. This is significant, given the shortage of medical experts capable of interpreting fetal ultrasound images in numerous countries.
Affiliation(s)
- Szymon Płotka
- Sano Centre for Computational Medicine, Czarnowiejska 36, 30-054 Cracow, Poland; Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland; Fetai Health Ltd., Warsaw, Poland
- Aneta Lisowska
- Sano Centre for Computational Medicine, Czarnowiejska 36, 30-054 Cracow, Poland; Poznan University of Technology, Piotrowo 3, 60-965 Poznan, Poland
- Michał Lipa
- 1st Department of Obstetrics and Gynecology, Medical University of Warsaw, Plac Starynkiewicza 1/3, 02-015 Warsaw, Poland
- Tomasz Trzciński
- Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland; Jagiellonian University, Prof. Stanisława Łojosiewicza 6, 30-348 Cracow, Poland
- Arkadiusz Sitek
- Sano Centre for Computational Medicine, Czarnowiejska 36, 30-054 Cracow, Poland
25
Arroyo J, Marini TJ, Saavedra AC, Toscano M, Baran TM, Drennan K, Dozier A, Zhao YT, Egoavil M, Tamayo L, Ramos B, Castaneda B. No sonographer, no radiologist: New system for automatic prenatal detection of fetal biometry, fetal presentation, and placental location. PLoS One 2022; 17:e0262107. [PMID: 35139093; PMCID: PMC8827457; DOI: 10.1371/journal.pone.0262107]
Abstract
Ultrasound imaging is a vital component of high-quality obstetric care. In rural and under-resourced communities, the scarcity of ultrasound imaging results in a considerable gap in the healthcare of pregnant mothers. To increase access to ultrasound in these communities, we developed a new automated diagnostic framework operated without an experienced sonographer or interpreting provider for assessment of fetal biometric measurements, fetal presentation, and placental position. This approach involves the use of a standardized volume sweep imaging (VSI) protocol based solely on external body landmarks to obtain imaging without an experienced sonographer and application of a deep learning algorithm (U-Net) for diagnostic assessment without a radiologist. Obstetric VSI ultrasound examinations were performed in Peru by an ultrasound operator with no previous ultrasound experience who underwent 8 hours of training on a standard protocol. The U-Net was trained to automatically segment the fetal head and placental location from the VSI ultrasound acquisitions to subsequently evaluate fetal biometry, fetal presentation, and placental position. In comparison to diagnostic interpretation of VSI acquisitions by a specialist, the U-Net model showed 100% agreement for fetal presentation (Cohen's κ 1 (p<0.0001)) and 76.7% agreement for placental location (Cohen's κ 0.59 (p<0.0001)). This corresponded to 100% sensitivity and specificity for fetal presentation and 87.5% sensitivity and 85.7% specificity for anterior placental location. The method also achieved a low relative error of 5.6% for biparietal diameter and 7.9% for head circumference. Biometry measurements corresponded to estimated gestational age within 2 weeks of those assigned by standard of care examination with up to 89% accuracy. This system could be deployed in rural and underserved areas to provide vital information about a pregnancy without a trained sonographer or interpreting provider.
The resulting increased access to ultrasound imaging and diagnosis could improve disparities in healthcare delivery in under-resourced areas.
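The Cohen's κ values reported in this abstract are chance-corrected agreement scores between the model and the specialist. A minimal sketch in plain Python, using illustrative presentation labels rather than the study's actual data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for agreement expected by chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence: sum over labels of p_a(label) * p_b(label)
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Perfect agreement (as reported for fetal presentation) yields kappa = 1.0
print(cohens_kappa(["cephalic", "breech", "cephalic"],
                   ["cephalic", "breech", "cephalic"]))  # → 1.0
```

On real data, `sklearn.metrics.cohen_kappa_score` computes the same statistic.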
Affiliation(s)
- Junior Arroyo
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
- Thomas J. Marini
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Ana C. Saavedra
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
- Marika Toscano
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, New York, United States of America
- Timothy M. Baran
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Kathryn Drennan
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, New York, United States of America
- Ann Dozier
- Department of Public Health, University of Rochester Medical Center, Rochester, New York, United States of America
- Yu Tina Zhao
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Miguel Egoavil
- Research & Development, Medical Innovation & Technology, Lima, Perú
- Lorena Tamayo
- Research & Development, Medical Innovation & Technology, Lima, Perú
- Berta Ramos
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Benjamin Castaneda
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
26
He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021; 8:729978. [PMID: 34977053; PMCID: PMC8716504; DOI: 10.3389/fmed.2021.729978]
Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, research on AI in prenatal US has practical significance: its application to prenatal US diagnosis can improve work efficiency, provide quantitative assessments, standardize measurements, improve diagnostic accuracy, and automate image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.
Affiliation(s)
- Lizhu Chen
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
27
Chen Z, Liu Z, Du M, Wang Z. Artificial Intelligence in Obstetric Ultrasound: An Update and Future Applications. Front Med (Lausanne) 2021; 8:733468. [PMID: 34513890; PMCID: PMC8429607; DOI: 10.3389/fmed.2021.733468]
Abstract
Artificial intelligence (AI) can support clinical decisions and provide quality assurance for images. Although ultrasonography is commonly used in the field of obstetrics and gynecology, the use of AI there is still in its infancy. Nevertheless, in repetitive ultrasound examinations, such as those involving automatic positioning and identification of fetal structures, prediction of gestational age (GA), and real-time image quality assurance, AI has great potential. To realize its application, it is necessary to promote interdisciplinary communication between AI developers and sonographers. In this review, we outline the benefits of AI technology in obstetric ultrasound diagnosis by optimizing image acquisition, quantification, segmentation, and location identification, which can be helpful for obstetric ultrasound diagnosis in different periods of pregnancy.
Affiliation(s)
- Zhiyi Chen
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China; Institute of Medical Imaging, University of South China, Hengyang, China
- Zhenyu Liu
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Meng Du
- Institute of Medical Imaging, University of South China, Hengyang, China
- Ziyao Wang
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
28
Self A, Papageorghiou AT. Ultrasound Diagnosis of the Small and Large Fetus. Obstet Gynecol Clin North Am 2021; 48:339-357. [PMID: 33972070; DOI: 10.1016/j.ogc.2021.03.003]
Abstract
Antenatal imaging is crucial in the management of high-risk pregnancies. Accurate dating relies on acquisition of reliable and reproducible ultrasound images and measurements. Quality image acquisition is necessary for assessing fetal growth and performing Doppler measurements to help diagnose pregnancy complications, stratify risk, and guide management. Further research is needed to ascertain whether current methods for estimating fetal weight can be improved with 3-dimensional ultrasound or magnetic resonance imaging; optimize dating with late initiation of prenatal care; minimize under-diagnosis of fetal growth restriction; and identify the best strategies to make ultrasound more available in low-income and middle-income countries.
Affiliation(s)
- Alice Self
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK.
29
Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. [PMID: 34201827; PMCID: PMC8301304; DOI: 10.3390/biomedicines9070720]
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed steadily compared to other medical imaging modalities. The characteristic issues of US imaging owing to its manual operation and acoustic shadows cause difficulties in image quality control. In this review, we would like to introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, ingenious algorithms that are suitable for US imaging analysis, AI explainability for obtaining informed consent, the approval process of medical AI devices, and future perspectives towards the clinical application of AI-based US diagnostic support technologies.
Affiliation(s)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
30
Recognition of Thyroid Ultrasound Standard Plane Images Based on Residual Network. Comput Intell Neurosci 2021; 2021:5598001. [PMID: 34188673; PMCID: PMC8192196; DOI: 10.1155/2021/5598001]
Abstract
Ultrasound is one of the critical methods for diagnosis and treatment in thyroid examination. In clinical practice, factors such as heavy outpatient traffic, the time-consuming training of sonographers, and the uneven professional level of physicians often cause irregularities during ultrasonic examination, leading to misdiagnosis or missed diagnosis. To standardize the thyroid ultrasound examination process, this paper proposes a deep learning method based on a residual network to recognize the Thyroid Ultrasound Standard Plane (TUSP). First, referring to multiple relevant guidelines, eight TUSP were determined with the advice of clinical ultrasound experts. A total of 5,500 TUSP images in 8 categories were collected with the approval of the Ethics Committee and the patients' informed consent. Then, after desensitizing and filling the images, an 18-layer residual network model (ResNet-18) was trained for TUSP image recognition, with five-fold cross-validation. Finally, using indicators such as accuracy, its recognition performance was compared with other mainstream deep convolutional neural network models. Experimental results showed that ResNet-18 achieved the best recognition performance on TUSP images, with an average accuracy of 91.07%. The average macro precision, average macro recall, and average macro F1-score were 91.39%, 91.34%, and 91.30%, respectively. This shows that a residual-network-based deep learning method can effectively recognize TUSP images, which is expected to standardize clinical thyroid ultrasound examination and reduce misdiagnosis and missed diagnosis.
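The macro-averaged precision, recall, and F1 reported for the ResNet-18 classifier are the unweighted means of the per-class scores. A minimal sketch in plain Python with toy labels (not the study's data):

```python
def macro_f1_report(y_true, y_pred):
    """Per-class precision/recall/F1, then their unweighted (macro) means."""
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        precisions.append(precision)
        recalls.append(recall)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

On real data, `sklearn.metrics.precision_recall_fscore_support(..., average='macro')` computes the same quantities.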
31
Dougherty A, Kasten M, DeSarno M, Badger G, Streeter M, Jones DC, Sussman B, DeStigter K. Validation of a Telemedicine Quality Assurance Method for Point-of-Care Obstetric Ultrasound Used in Low-Resource Settings. J Ultrasound Med 2021; 40:529-540. [PMID: 32770709; DOI: 10.1002/jum.15429]
Abstract
OBJECTIVES A remote quality assurance and improvement protocol for point-of-care obstetric ultrasound in low-resource areas was validated against the standard of care for obstetric ultrasound in the United States. METHODS Compressed movie clip ultrasound images (obstetric sweep protocol) obtained by minimally trained personnel were read and interpreted by physicians with training in obstetric ultrasound. Observed findings were compared among readers and between each reader and the gold standard ultrasound scan report. Descriptive statistics were used for the analysis. RESULTS The agreements among readers and between readers and the gold standard, for the anterior and posterior variables of the placental location were excellent, with Cohen κ values of 0.81 to 0.88 and 0.77 to 0.9, respectively. Cohen κ values were slight or slight/fair for other placental locations (left, right, fundal, and low), and the sensitivity and specificity ranged widely. The agreement among readers and between readers and the gold standard for fetal number comparisons was also excellent, with Cohen κ values ranging from 0.82 to 1, sensitivity from 0.83 to 1, and specificity from 0.99 to 1. The agreement among readers for fetal presentation comparisons, according to the Cohen κ, ranged from 0.79 to 0.85 and between readers and the gold standard had values of 0.43 to 0.49. For biometric parameters and estimated gestational age calculations based on these parameters, inter-reader reliability ranged from 0.79 to 0.85 for all parameters except femur length. Greater than 94% of obstetric sweep protocol ultrasound ages were within 7 days of the corresponding gold standard age. CONCLUSIONS Movie clip ultrasound images provided adequate information for remote readers to reliably determine the placental location, fetal number, fetal presentation, and pregnancy dating.
Affiliation(s)
- Anne Dougherty
- Department of Obstetrics, Gynecology, and Reproductive Sciences, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- Michael DeSarno
- Department of Medical Biostatistics and Bioinformatics, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- Gary Badger
- Department of Medical Biostatistics and Bioinformatics, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- Mary Streeter
- Department of Radiology, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- David C Jones
- Department of Obstetrics, Gynecology, and Reproductive Sciences, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- Betsy Sussman
- Department of Radiology, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
- Kristen DeStigter
- Department of Radiology, University of Vermont Larner College of Medicine, Burlington, Vermont, USA
32
Prieto JC, Shah H, Rosenbaum AJ, Jiang X, Musonda P, Price JT, Stringer EM, Vwalika B, Stamilio DM, Stringer JSA. An automated framework for image classification and segmentation of fetal ultrasound images for gestational age estimation. Proc SPIE Int Soc Opt Eng 2021; 11596:115961N. [PMID: 33935344; PMCID: PMC8086527; DOI: 10.1117/12.2582243]
Abstract
Accurate assessment of fetal gestational age (GA) is critical to the clinical management of pregnancy. Industrialized countries rely upon obstetric ultrasound (US) to make this estimate. In low- and middle- income countries, automatic measurement of fetal structures using a low-cost obstetric US may assist in establishing GA without the need for skilled sonographers. In this report, we leverage a large database of obstetric US images acquired, stored and annotated by expert sonographers to train algorithms to classify, segment, and measure several fetal structures: biparietal diameter (BPD), head circumference (HC), crown rump length (CRL), abdominal circumference (AC), and femur length (FL). We present a technique for generating raw images suitable for model training by removing caliper and text annotation and describe a fully automated pipeline for image classification, segmentation, and structure measurement to estimate the GA. The resulting framework achieves an average accuracy of 93% in classification tasks, a mean Intersection over Union accuracy of 0.91 during segmentation tasks, and a mean measurement error of 1.89 centimeters, finally leading to a 1.4 day mean average error in the predicted GA compared to expert sonographer GA estimate using the Hadlock equation.
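The segmentation score reported above, mean Intersection over Union (IoU), is one of two closely related overlap measures on binary masks; the Dice coefficient used by several other studies in this list is the other. A minimal sketch representing masks as Python sets of pixel coordinates (toy masks, not study data):

```python
def iou(mask_a, mask_b):
    """Intersection over Union (Jaccard index) of two binary masks given as pixel sets."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union else 1.0

def dice(mask_a, mask_b):
    """Dice coefficient; related to IoU by dice = 2*IoU / (1 + IoU)."""
    total = len(mask_a) + len(mask_b)
    return 2 * len(mask_a & mask_b) / total if total else 1.0

# Two 3-pixel masks overlapping in 2 pixels: IoU = 2/4, Dice = 4/6
a = {(0, 0), (0, 1), (1, 0)}
b = {(0, 1), (1, 0), (1, 1)}
print(iou(a, b), dice(a, b))  # → 0.5 0.6666666666666666
```

The same identity explains why Dice values always look slightly higher than IoU values for the same segmentation.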
Affiliation(s)
- Juan C. Prieto
- Department of Psychiatry, University of North Carolina at Chapel Hill
- Hina Shah
- Department of Psychiatry, University of North Carolina at Chapel Hill
- Alan J. Rosenbaum
- Department of Obstetrics and Gynecology, University of North Carolina at Chapel Hill
- Xiaoning Jiang
- Department of Mechanical and Aerospace Engineering, North Carolina State University
- Joan T. Price
- Department of Obstetrics and Gynecology, University of North Carolina at Chapel Hill
- Elizabeth M. Stringer
- Department of Obstetrics and Gynecology, University of North Carolina at Chapel Hill
- Bellington Vwalika
- Department of Obstetrics and Gynaecology, University of Zambia School of Medicine
- David M. Stamilio
- Department of Obstetrics and Gynecology, Wake Forest University School of Medicine
33
Xie HN, Wang N, He M, Zhang LH, Cai HM, Xian JB, Lin MF, Zheng J, Yang YZ. Using deep-learning algorithms to classify fetal brain ultrasound images as normal or abnormal. Ultrasound Obstet Gynecol 2020; 56:579-587. [PMID: 31909548; DOI: 10.1002/uog.21967]
Abstract
OBJECTIVES To evaluate the feasibility of using deep-learning algorithms to classify as normal or abnormal sonographic images of the fetal brain obtained in standard axial planes. METHODS We included in the study images retrieved from a large hospital database from 10 251 normal and 2529 abnormal pregnancies. Abnormal cases were confirmed by neonatal ultrasound, follow-up examination or autopsy. After a series of pretraining data processing steps, 15 372 normal and 14 047 abnormal fetal brain images in standard axial planes were obtained. These were divided into training and test datasets (at case level rather than image level), at a ratio of approximately 8:2. The training data were used to train the algorithms for three purposes: performance of image segmentation along the fetal skull, classification of the image as normal or abnormal and localization of the lesion. The accuracy was then tested on the test datasets, with performance of segmentation being assessed using precision, recall and Dice's coefficient (DICE), calculated to measure the extent of overlap between human-labeled and machine-segmented regions. We assessed classification accuracy by calculating the sensitivity and specificity for abnormal images. Additionally, for 2491 abnormal images, we determined how well each lesion had been localized by overlaying heat maps created by an algorithm on the segmented ultrasound images; an expert judged these in terms of how satisfactory was the lesion localization by the algorithm, classifying this as having been done precisely, closely or irrelevantly. RESULTS Segmentation precision, recall and DICE were 97.9%, 90.9% and 94.1%, respectively. For classification, the overall accuracy was 96.3%. The sensitivity and specificity for identification of abnormal images were 96.9% and 95.9%, respectively, and the area under the receiver-operating-characteristics curve was 0.989 (95% CI, 0.986-0.991). 
The algorithms located lesions precisely in 61.6% (1535/2491) of the abnormal images, closely in 24.6% (614/2491) and irrelevantly in 13.7% (342/2491). CONCLUSIONS Deep-learning algorithms can be trained for segmentation and classification of normal and abnormal fetal brain ultrasound images in standard axial planes and can provide heat maps for lesion localization. This study lays the foundation for further research on the differential diagnosis of fetal intracranial abnormalities.
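The sensitivity and specificity figures reported above follow directly from the classifier's confusion matrix. A minimal sketch in plain Python with toy labels (not the study's data):

```python
def confusion_counts(y_true, y_pred, positive="abnormal"):
    """Count TP/FN/TN/FP for a chosen positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp, fn, tn, fp

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 3 of 4 abnormal images caught, 3 of 4 normal images correctly cleared
truth = ["abnormal"] * 4 + ["normal"] * 4
preds = ["abnormal", "abnormal", "abnormal", "normal",
         "normal", "normal", "normal", "abnormal"]
print(sensitivity_specificity(*confusion_counts(truth, preds)))  # → (0.75, 0.75)
```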
Affiliation(s)
- H N Xie
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- N Wang
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China
- M He
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- L H Zhang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- H M Cai
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- J B Xian
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- M F Lin
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- J Zheng
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Y Z Yang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
34
Kiyasseh D, Zhu T, Clifton D. The Promise of Clinical Decision Support Systems Targetting Low-Resource Settings. IEEE Rev Biomed Eng 2020; 15:354-371. [PMID: 32813662; DOI: 10.1109/rbme.2020.3017868]
Abstract
Low-resource clinical settings are plagued by low physician-to-patient ratios and a shortage of high-quality medical expertise and infrastructure. Together, these phenomena lead to over-burdened healthcare systems that under-serve the needs of the community. This burden can be alleviated by introducing clinical decision support systems (CDSSs): systems that support stakeholders (ranging from physicians to patients) within the clinical setting in their day-to-day activities. Such systems, which have proven effective in the developed world, remain under-explored in low-resource settings. This review attempts to summarize the research focused on clinical decision support systems that either target stakeholders within low-resource clinical settings or diseases commonly found in such environments. When categorizing our findings by disease application, we find that CDSSs predominantly focus on bacterial infections and maternal care, do not leverage deep learning, and have not been evaluated prospectively. Together, these findings highlight the need for increased research in this domain in order to impact a diverse set of medical conditions and ultimately improve patient outcomes.
35
Hu R, Singla R, Yan R, Mayer C, Rohling RN. Automated Placenta Segmentation with a Convolutional Neural Network Weighted by Acoustic Shadow Detection. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:6718-6723. [PMID: 31947383; DOI: 10.1109/embc.2019.8857448]
Abstract
Placental assessment through routine obstetrical ultrasound is often limited to documenting its location and ruling out placenta previa. However, many obstetrical complications originate from abnormal focal or global placental development. Technical difficulties in assessing the placenta, as well as a lack of established objective criteria to classify echotexture, are barriers to the diagnosis of pathology by ultrasound imaging. As a first step towards the development of a computer-aided placental assessment tool, we developed a fully automated method for placental segmentation using a convolutional neural network. The network contains a novel layer, weighted by automated acoustic shadow detection, to recognize artifacts specific to ultrasound. To develop a detection algorithm usable in different imaging scenarios, we acquired a dataset of 1364 fetal ultrasound images from 247 patients over 47 months, taken with different machines, by different operators, and at a range of gestational ages. Mean Dice coefficients for automated segmentation on the full dataset, with and without the acoustic shadow detection layer, were 0.92±0.04 and 0.91±0.03 when compared to manual segmentation. Mean Dice coefficients on the subset of images containing acoustic shadows, with and without acoustic shadow detection, were 0.87±0.04 and 0.75±0.05. The method requires no user input to tune the detection. The automated placenta segmentation method can serve as a preprocessing step for further image analysis in artificial intelligence methods requiring large-scale processing of placental images.
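The Dice coefficient used above to score segmentation overlap is a standard metric; a minimal sketch on flat binary masks (the function name and example masks are illustrative, not taken from the paper):

```python
def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks given as flat 0/1 lists."""
    inter = sum(x and y for x, y in zip(a, b))  # pixels labeled 1 in both masks
    total = sum(a) + sum(b)                     # total foreground pixels across both masks
    return 2 * inter / total if total else 1.0  # two empty masks agree perfectly

pred = [1, 1, 0, 0, 1]   # predicted mask
truth = [1, 0, 0, 1, 1]  # manual (ground-truth) mask
print(round(dice(pred, truth), 3))  # 2*2 / (3+3) = 0.667
```

A score of 1.0 means perfect overlap with the manual segmentation, so the reported 0.92±0.04 indicates close agreement.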
36
Xie B, Lei T, Wang N, Cai H, Xian J, He M, Zhang L, Xie H. Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks. Int J Comput Assist Radiol Surg 2020; 15:1303-1312. [PMID: 32488568] [DOI: 10.1007/s11548-020-02182-3]
Abstract
PURPOSE Fetal brain abnormalities are among the most common congenital malformations; they may be associated with syndromic and chromosomal malformations and can lead to neurodevelopmental delay and mental retardation. Early prenatal detection of brain abnormalities is essential for informing clinical management pathways and counseling parents. The purpose of this research is to develop computer-aided diagnosis algorithms for five common fetal brain abnormalities, which may assist doctors in detecting brain abnormalities during antenatal neurosonographic assessment. METHODS We applied a classifier to classify images of fetal brain standard planes (transventricular and transcerebellar) as normal or abnormal. The classifier was trained on image-level labeled images. In the first step, craniocerebral regions were segmented from the ultrasound images. These segmentations were then classified into four categories. Finally, the lesions in the abnormal images were localized by class activation mapping. RESULTS We evaluated our algorithms on real-world clinical datasets of fetal brain ultrasound images. The proposed method achieved a Dice score of 0.942 on craniocerebral region segmentation, an average F1-score of 0.96 on classification, and a mean IoU of 0.497 on lesion localization. CONCLUSION We present computer-aided diagnosis algorithms for fetal brain ultrasound images based on deep convolutional neural networks. Our algorithms could potentially be applied in diagnosis assistance and are expected to help junior doctors make clinical decisions and reduce false negatives for fetal brain abnormalities.
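The mean IoU used above to score lesion localization is the standard intersection-over-union; a minimal sketch on flat binary masks (the function and example masks are ours, not from the paper):

```python
def iou(a, b):
    """Intersection-over-union |A∩B| / |A∪B| for binary masks given as flat 0/1 lists."""
    inter = sum(x and y for x, y in zip(a, b))  # pixels labeled 1 in both masks
    union = sum(x or y for x, y in zip(a, b))   # pixels labeled 1 in either mask
    return inter / union if union else 1.0      # two empty masks agree perfectly

pred = [1, 1, 0, 0, 1]   # localized lesion region
truth = [1, 0, 0, 1, 1]  # annotated lesion region
print(iou(pred, truth))  # 2 / 4 = 0.5
```

Note that IoU penalizes disagreement more heavily than Dice (for the same masks, Dice would be 0.667), which is why localization scores such as the 0.497 reported here tend to read lower than segmentation Dice scores.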
Affiliation(s)
- Baihong Xie
- South China University of Technology, Guangzhou, China
- Ting Lei
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
- Nan Wang
- Guangzhou Aiyunji Information Technology Co., Ltd., Guangzhou, China
- Hongmin Cai
- South China University of Technology, Guangzhou, China
- Jianbo Xian
- South China University of Technology, Guangzhou, China; Guangzhou Aiyunji Information Technology Co., Ltd., Guangzhou, China
- Miao He
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
- Lihe Zhang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
- Hongning Xie
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
37
Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet 2020; 395:1579-1586. [PMID: 32416782] [PMCID: PMC7255280] [DOI: 10.1016/s0140-6736(20)30226-9]
Abstract
Concurrent advances in information technology infrastructure and mobile computing power in many low- and middle-income countries (LMICs) have raised hopes that artificial intelligence (AI) might help to address challenges unique to the field of global health and accelerate achievement of the health-related sustainable development goals. A series of fundamental questions have been raised about AI-driven health interventions, and whether the tools, methods, and protections traditionally used to make ethical and evidence-based decisions about new technologies can be applied to AI. Deployment of AI has already begun for a broad range of health issues common to LMICs, with interventions focused primarily on communicable diseases, including tuberculosis and malaria. Types of AI vary, but most use some form of machine learning or signal processing. Several types of machine learning methods are frequently used together, as is machine learning with other approaches, most often signal processing. AI-driven health interventions fit into four categories relevant to global health researchers: (1) diagnosis, (2) patient morbidity or mortality risk assessment, (3) disease outbreak prediction and surveillance, and (4) health policy and planning. However, much of the AI-driven intervention research in global health does not describe ethical, regulatory, or practical considerations required for widespread use or deployment at scale. Despite the field remaining nascent, AI-driven health interventions could lead to improved health outcomes in LMICs. Although some challenges of developing and deploying these interventions might not be unique to these settings, the global health community will need to work quickly to establish guidelines for development, testing, and use, and develop a user-driven research agenda to facilitate equitable and ethical use.
Affiliation(s)
- Nina Schwalbe
- Heilbrunn Department of Population and Family Health, Columbia Mailman School of Public Health, New York, NY, USA; Spark Street Advisors, New York, NY, USA
- Brian Wahl
- Spark Street Advisors, New York, NY, USA; Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
38
Geary M, Chibwesha C, Stringer E. Contemporary Issues in Women's Health. Int J Gynaecol Obstet 2020; 146:36-38. [PMID: 31173357] [DOI: 10.1002/ijgo.12845]
Abstract
Topics covered in this commentary include:
- Should we be performing routine salpingectomy as a preventative approach to reduce the risk of ovarian cancer?
- Machine learning in women's health.
- Women who undergo cesarean delivery in many parts of Africa are at high risk of severe maternal morbidity and mortality.
Affiliation(s)
- Michael Geary
- Department of Obstetrics and Gynecology, Rotunda Hospital, Dublin, Ireland
- Carla Chibwesha
- Institute for Global Health and Infectious Diseases, University of North Carolina, Chapel Hill, NC, USA
- Elizabeth Stringer
- Obstetrics and Gynecology, University of North Carolina, Chapel Hill, NC, USA
39
Garcia-Canadilla P, Sanchez-Martinez S, Crispi F, Bijnens B. Machine Learning in Fetal Cardiology: What to Expect. Fetal Diagn Ther 2020; 47:363-372. [PMID: 31910421] [DOI: 10.1159/000505021]
Abstract
In fetal cardiology, imaging (especially echocardiography) has been shown to help in the diagnosis and monitoring of fetuses with a compromised cardiovascular system potentially associated with several fetal conditions. Different ultrasound approaches are currently used to evaluate fetal cardiac structure and function, including conventional 2-D imaging, M-mode, and tissue Doppler imaging, among others. However, assessment of the fetal heart remains challenging, mainly due to involuntary movements of the fetus, the small size of the heart, and the lack of expertise in fetal echocardiography of some sonographers. Therefore, the use of new technologies to improve the primary acquired images, to help extract measurements, or to aid in the diagnosis of cardiac abnormalities is of great importance for optimal assessment of the fetal heart. Machine learning (ML) is a computer science discipline focused on teaching a computer to perform tasks with specific goals without explicitly programming the rules for performing them. In this review we provide a brief overview of the potential of ML techniques to improve the evaluation of fetal cardiac function by optimizing image acquisition and quantification/segmentation, as well as to aid in improving the prenatal diagnosis of fetal cardiac remodeling and abnormalities.
Affiliation(s)
- Patricia Garcia-Canadilla
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Institute of Cardiovascular Science, University College London, London, United Kingdom
- Fatima Crispi
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Fetal Medicine Research Center, BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), Institut Clínic de Ginecologia Obstetricia i Neonatologia, Centre for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Bart Bijnens
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium; ICREA, Barcelona, Spain
40
Eroglu H, Orgul G, Avcı E, Altınboga O, Karakoc G, Yucel A. Comparison of automated vs. manual measurement to estimate fetal weight in isolated polyhydramnios. J Perinat Med 2019; 47:592-597. [PMID: 31141491] [DOI: 10.1515/jpm-2019-0083]
Abstract
Objective To understand the impact of the measurement method on predicting actual birthweight in pregnancies complicated by isolated polyhydramnios in the third trimester. Methods A prospective study was conducted with 60 pregnant women between the 37th and 40th weeks of gestation. Routine biometric measurements were obtained by two-dimensional (2D) ultrasonography. When a satisfactory image was obtained, it was frozen to take two measurements: first, calipers were placed manually; then an automated measurement was captured by the ultrasonography machine on the same image. Fetal weight was estimated using the Hadlock II formula. Results The mean difference was 0.03, -0.77, -0.02 and 0.17 for biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC) and femur length (FL), respectively. Pearson's correlation coefficient between the automated and manual estimated fetal weights (EFWs) and the actual birthweight was 0.919 and 0.796, respectively. The mean difference between actual birthweight and manual EFW was 46.16 ± 363.81 g (range -745 g to 685 g) (P = 0.330), and between actual birthweight and automated EFW it was 31.98 ± 218.65 g (range -378 g to 742 g) (P = 0.262). Bland-Altman analysis showed that manual measurement could yield values 666 g lower to 759 g higher than the actual birthweight, whereas automated measurement yielded EFW results 396 g lower to 460 g higher. Conclusion The accuracy of fetal weight estimation with ultrasonography is high for both automated and manual measurements; automated tools predicted EFW more successfully.
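The Bland-Altman limits quoted above follow directly from the reported mean differences and standard deviations (mean ± 1.96 SD); a quick sketch reproducing them from the paper's summary statistics (the function name is ours):

```python
def limits_of_agreement(mean_diff, sd_diff):
    """Bland-Altman 95% limits of agreement: mean difference +/- 1.96 * SD of differences."""
    half_width = 1.96 * sd_diff
    return mean_diff - half_width, mean_diff + half_width

# Manual EFW vs. actual birthweight: mean difference 46.16 g, SD 363.81 g
lo, hi = limits_of_agreement(46.16, 363.81)
print(f"manual:    {lo:.0f} to {hi:.0f} g")   # -667 to 759 g

# Automated EFW vs. actual birthweight: mean difference 31.98 g, SD 218.65 g
lo, hi = limits_of_agreement(31.98, 218.65)
print(f"automated: {lo:.0f} to {hi:.0f} g")   # -397 to 461 g
```

These match the abstract's 666/759 and 396/460 to within a gram; the small discrepancies presumably reflect rounding in the published standard deviations. The narrower automated limits are what supports the conclusion that automated tools predicted EFW more successfully.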
Affiliation(s)
- Hasan Eroglu
- Department of Perinatology, Etlik Zubeyde Hanim Women's Health Care, Training and Research Hospital, University of Health Sciences, Ankara, Turkey
- Gokcen Orgul
- Department of Perinatology, Etlik Zubeyde Hanim Women's Health Care, Training and Research Hospital, University of Health Sciences, Ankara, Turkey
- Emine Avcı
- Department of Communicable Diseases, General Directorate of Public Health, Ankara, Turkey
- Orhan Altınboga
- Department of Perinatology, Etlik Zubeyde Hanim Women's Health Care, Training and Research Hospital, University of Health Sciences, Ankara, Turkey
- Gokhan Karakoc
- Department of Perinatology, Etlik Zubeyde Hanim Women's Health Care, Training and Research Hospital, University of Health Sciences, Ankara, Turkey
- Aykan Yucel
- Department of Perinatology, Etlik Zubeyde Hanim Women's Health Care, Training and Research Hospital, University of Health Sciences, Ankara, Turkey