1. Lasala A, Fiorentino MC, Bandini A, Moccia S. FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis. Comput Med Imaging Graph 2024;116:102405. PMID: 38824716. DOI: 10.1016/j.compmedimag.2024.102405.
Abstract
Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet uses class activation maps as a prior in its conditional adversarial training process, fostering the presence of specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of FetalBrainAwareNet is highlighted by its ability to generate high-quality images of the three predominant FHSPs within a single, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability than state-of-the-art methods. Using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared with training the same classifiers solely on real acquisitions. These achievements suggest that augmenting the training set with our synthetic images could enhance the performance of DL algorithms for FHSP classification and support their integration into real clinical scenarios.
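The headline Fréchet inception distance (FID) of 88.52 can be made concrete with a minimal numpy sketch of the metric itself, computed between two sets of feature vectors. This is the generic FID formulation, not the authors' code; extraction of the features by a pretrained Inception network is assumed to have happened upstream.

```python
import numpy as np

def frechet_inception_distance(feats_real, feats_synth):
    """FID between two sets of feature vectors (rows = samples).

    d^2 = |mu1 - mu2|^2 + Tr(S1 + S2 - 2*(S1 S2)^{1/2}); the trace of the
    matrix square root is evaluated via the eigenvalues of S1 @ S2, which
    are real and non-negative for covariance matrices.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_synth.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_synth, rowvar=False)
    eigs = np.linalg.eigvals(s1 @ s2)
    tr_sqrt = np.sum(np.sqrt(np.clip(eigs.real, 0.0, None)))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1) + np.trace(s2) - 2.0 * tr_sqrt)
```

Identical feature sets give an FID of zero; shifting one set's mean raises it by the squared shift, which is why lower values indicate synthetic images statistically closer to real ones.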
Affiliation(s)
- Angelo Lasala
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Andrea Bandini
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy; Health Science Interdisciplinary Research Center, Scuola Superiore Sant'Anna, Pisa, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
2. Degala SKB, Tewari RP, Kamra P, Kasiviswanathan U, Pandey R. Segmentation and Estimation of Fetal Biometric Parameters using an Attention Gate Double U-Net with Guided Decoder Architecture. Comput Biol Med 2024;180:109000. PMID: 39133952. DOI: 10.1016/j.compbiomed.2024.109000.
Abstract
The fetus's health is evaluated with biometric parameters obtained from low-resolution ultrasound images. The accuracy of biometric parameters in existing protocols typically depends on conventional image-processing approaches and is hence prone to error. This study introduces the Attention Gate Double U-Net with Guided Decoder (ADU-GD) model, specifically crafted for fetal biometric parameter prediction. The attention network and guided decoder are designed to dynamically merge local features with their global dependencies, enhancing the precision of parameter estimation. ADU-GD displays superior performance, with a Mean Absolute Error of 0.99 mm and a segmentation accuracy of 99.1% when benchmarked against well-established models. The proposed model consistently achieved a high Dice index of about 99.1 ± 0.8, a minimal Hausdorff distance of about 1.01 ± 1.07, and a low Average Symmetric Surface Distance of about 0.25 ± 0.21, demonstrating the model's excellence. In a comprehensive evaluation, ADU-GD outperformed existing deep-learning models such as Double U-Net, DeepLabv3, FCN-32s, PSPNet, SegNet, Trans U-Net, Swin U-Net, Mask-R2CNN, and RDHCformer in terms of Mean Absolute Error for crucial fetal dimensions, including Head Circumference, Abdomen Circumference, Femur Length, and BiParietal Diameter, achieving MAE values of 2.2 mm, 2.6 mm, 0.6 mm, and 1.2 mm, respectively.
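The Dice index reported above has a standard definition that is easy to reproduce; the following is a generic numpy sketch of the metric for binary segmentation masks, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A perfect match scores 1.0; two equally sized regions sharing half their pixels score 0.5, which is why values near 0.99 indicate near-pixel-perfect agreement.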
Affiliation(s)
- Sajal Kumar Babu Degala
- Department of Applied Mechanics, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, 211004, Uttar Pradesh, India
- Ravi Prakash Tewari
- Department of Applied Mechanics, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, 211004, Uttar Pradesh, India
- Pankaj Kamra
- Kamra Ultrasound Centre and United Diagnostics, Prayagraj, 211002, Uttar Pradesh, India
- Uvanesh Kasiviswanathan
- Department of Applied Mechanics, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, 211004, Uttar Pradesh, India
- Ramesh Pandey
- Department of Applied Mechanics, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, 211004, Uttar Pradesh, India
3. Cabezas M, Diez Y, Martinez-Diago C, Maroto A. A benchmark for 2D foetal brain ultrasound analysis. Sci Data 2024;11:923. PMID: 39181905. PMCID: PMC11344807. DOI: 10.1038/s41597-024-03774-3.
Abstract
Brain development involves a sequence of structural changes from the early stages of the embryo until several months after birth. Ultrasound is currently the established screening technique, owing to its ability to acquire dynamic images in real time without radiation and to its cost-efficiency. However, identifying abnormalities remains challenging because foetal brain images are difficult to interpret. In this work we present a set of 104 2D foetal brain ultrasound images, acquired during the 20th week of gestation, that have been co-registered to a common space from a rough skull segmentation. The images are provided both in the original space and in a template space centred on the ellipses of all the subjects. Furthermore, the images have been annotated to highlight landmark points of structures of interest for analysing brain development. Both the final atlas template with probabilistic maps and the original images can be used to develop new segmentation techniques, test registration approaches for foetal brain ultrasound, extend our work to longitudinal datasets, and detect anomalies in new images.
Affiliation(s)
- Mariano Cabezas
- Brain and Mind Centre, University of Sydney, Sydney, Australia
- Yago Diez
- Faculty of Science, Yamagata University, Yamagata, Japan
- Anna Maroto
- Hospital Universitari de Girona Doctor Josep Trueta, Girona, Spain
4. Chen Z, Lu Y, Long S, Campello VM, Bai J, Lekadir K. Fetal Head and Pubic Symphysis Segmentation in Intrapartum Ultrasound Image Using a Dual-Path Boundary-Guided Residual Network. IEEE J Biomed Health Inform 2024;28:4648-4659. PMID: 38739504. DOI: 10.1109/jbhi.2024.3399762.
Abstract
Accurate segmentation of the fetal head and pubic symphysis in intrapartum ultrasound images and measurement of the fetal angle of progression (AoP) are critical to both outcome prediction and complication prevention in delivery. However, owing to the poor quality of perinatal ultrasound imaging, with blurred target boundaries and the relatively small target of the pubic symphysis, fully automated and accurate segmentation remains challenging. In this paper, we propose a dual-path boundary-guided residual network (DBRN), a novel approach to tackle these challenges. The model contains a multi-scale weighted module (MWM) to gather global context information and enhance the feature response within the target region by weighting the feature map. The model also incorporates an enhanced boundary module (EBM) to obtain more precise boundary information. Furthermore, the model introduces a boundary-guided dual-attention residual module (BDRM) for residual learning. BDRM leverages boundary information as prior knowledge and employs spatial attention to focus simultaneously on background and foreground information, in order to capture concealed details and improve segmentation accuracy. Extensive comparative experiments have been conducted on three datasets. The proposed method achieves an average Dice score of 0.908 ± 0.05 and an average Hausdorff distance of 3.396 ± 0.66 mm. Compared with state-of-the-art competitors, the proposed DBRN achieves better results. In addition, the average difference between automatic AoP measurement based on this model and manual measurement is 6.157°, showing good consistency and broad application prospects in clinical practice.
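Once the two structures are segmented, the AoP itself is a simple landmark geometry problem: by the usual intrapartum-ultrasound convention it is the angle at the inferior edge of the pubic symphysis between the symphysis long axis and the line to the point where a ray from that edge is tangent to the fetal skull. The sketch below only shows that final angle computation from three hypothetical landmark coordinates; deriving the landmarks from the DBRN masks is not covered by the abstract and is not shown.

```python
import numpy as np

def angle_of_progression(sym_superior, sym_inferior, head_tangent_pt):
    """AoP in degrees from three (x, y) landmarks (hypothetical inputs;
    in practice they would be derived from the segmentation masks)."""
    v_axis = np.asarray(sym_superior, float) - np.asarray(sym_inferior, float)
    v_head = np.asarray(head_tangent_pt, float) - np.asarray(sym_inferior, float)
    cosang = v_axis @ v_head / (np.linalg.norm(v_axis) * np.linalg.norm(v_head))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

With a vertical symphysis axis and a tangent point straight ahead of the inferior edge, the function returns 90°, a useful sanity check before comparing automatic and manual measurements.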
5. Liu T, Han S, Xie L, Xing W, Liu C, Li B, Ta D. Super-resolution reconstruction of ultrasound image using a modified diffusion model. Phys Med Biol 2024;69:125026. PMID: 38636526. DOI: 10.1088/1361-6560/ad4086.
Abstract
Objective. This study aims to perform super-resolution (SR) reconstruction of ultrasound images using a modified diffusion model, designated the diffusion model for ultrasound image super-resolution (DMUISR). SR converts low-resolution images to high-resolution ones, and the proposed model is designed to enhance the suitability of diffusion models for this task in the context of ultrasound imaging. Approach. DMUISR incorporates a multi-layer self-attention (MLSA) mechanism and a wavelet-transform-based low-resolution image (WTLR) encoder to enhance its suitability for ultrasound image SR tasks. The model takes interpolated, magnified images as input and outputs high-quality, detailed SR images. The study utilized 1,334 ultrasound images from the public fetal head-circumference dataset (HC18) for evaluation. Main results. Experiments were conducted at 2×, 4×, and 8× magnification factors. DMUISR outperformed mainstream ultrasound SR methods (Bicubic, VDSR, DECUSR, DRCN, REDNet, SRGAN) across all scales, providing high-quality images with clear structures and rich detailed textures in both hard- and soft-tissue regions. DMUISR successfully accomplished multiscale SR reconstruction while suppressing over-smoothing and mode-collapse problems. Quantitative results showed that DMUISR achieved the best performance in terms of learned perceptual image patch similarity, with a significant decrease of over 50% at all three magnification factors (2×, 4×, and 8×), as well as improvements in peak signal-to-noise ratio and structural similarity index measure. Ablation experiments validated the effectiveness of the MLSA mechanism and the WTLR encoder in improving DMUISR's SR performance. Furthermore, by reducing the number of diffusion steps, the computational time of DMUISR was shortened to nearly one-tenth of the original while maintaining image quality without significant degradation. Significance. This study demonstrates that the modified diffusion model, DMUISR, provides superior performance for SR reconstruction of ultrasound images and has potential to improve imaging quality in the medical ultrasound field.
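The peak signal-to-noise ratio used to rank the SR methods above has a fixed textbook definition; the following is a generic numpy sketch of it (not the authors' evaluation pipeline), useful for reproducing comparisons against a bicubic or any other baseline.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A uniform error of 16 grey levels on an 8-bit image yields roughly 24 dB, which gives a feel for the scale on which fractional-dB improvements are reported.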
Affiliation(s)
- Tianyu Liu
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Shuai Han
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Linru Xie
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Wenyu Xing
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Chengcheng Liu
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- State Key Laboratory of Integrated Chips and Systems, Fudan University, Shanghai 201203, People's Republic of China
- Boyi Li
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Dean Ta
- State Key Laboratory of Integrated Chips and Systems, Fudan University, Shanghai 201203, People's Republic of China
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, People's Republic of China
6. Bai J, Lu Y, Liu H, He F, Guo X. Editorial: New technologies improve maternal and newborn safety. Front Med Technol 2024;6:1372358. PMID: 38872737. PMCID: PMC11169838. DOI: 10.3389/fmedt.2024.1372358.
Affiliation(s)
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, China
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, China
- Huishu Liu
- Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou, China
- Fang He
- Department of Obstetrics and Gynecology, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Xiaohui Guo
- Department of Obstetrics, Shenzhen People’s Hospital, Shenzhen, China
7. Shi L, Di W, Liu J. Ultrasound image denoising autoencoder model based on lightweight attention mechanism. Quant Imaging Med Surg 2024;14:3557-3571. PMID: 38720841. PMCID: PMC11074761. DOI: 10.21037/qims-23-1654.
Abstract
Background: The presence of noise in medical ultrasound images significantly degrades image quality and affects the accuracy of disease diagnosis. The convolutional neural network-denoising autoencoder (CNN-DAE) model extracts feature information by stacking regularly sized kernels, which results in the loss of texture detail, over-smoothing of the image, and a lack of generalizability to speckle noise. Methods: A lightweight attention denoise-convolutional neural network (LAD-CNN) is proposed in the present study. Two different lightweight attention blocks (the lightweight channel attention (LCA) block and the lightweight large-kernel attention (LLA) block) are concatenated into the downsampling stage and the upsampling stage, respectively. A skip connection is included before the upsampling layer to alleviate the problem of gradient vanishing during backpropagation. The effectiveness of the model was evaluated using both subjective visual effects and objective evaluation metrics. Results: With the highest peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values at all noise levels, the proposed model outperformed the other models. In the test on brachial plexus ultrasound images, the average PSNR of our model was 0.15 higher at low noise levels and 0.33 higher at high noise levels than that of the suboptimal model. In the test on fetal ultrasound images, the average PSNR of our model was 0.23 higher at low noise levels and 0.20 higher at high noise levels than that of the suboptimal model. Statistical analysis yielded p values less than 0.05, indicating a statistically significant difference between our model and the other models. Conclusions: The results of this study suggest that the proposed LAD-CNN model is more efficient in denoising and preserving image details than both conventional denoising algorithms and existing deep-learning algorithms.
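The abstract does not specify the internals of the LCA block, but the general channel-attention idea it builds on (a squeeze-and-excite-style gate: global average pooling followed by a learned sigmoid scaling per channel) can be sketched in a few lines of numpy. The single-layer parameters `w` and `b` here are hypothetical stand-ins for whatever the paper actually learns.

```python
import numpy as np

def channel_attention(feat, w, b):
    """Minimal channel-attention gate over a (C, H, W) feature map.

    feat: feature map; w (C, C) and b (C,): illustrative weights of one
    fully connected excitation layer. Returns the recalibrated map.
    """
    squeeze = feat.mean(axis=(1, 2))                   # global average pool -> (C,)
    gate = 1.0 / (1.0 + np.exp(-(w @ squeeze + b)))    # sigmoid excitation -> (C,)
    return feat * gate[:, None, None]                  # rescale each channel
```

With zero weights the gate is 0.5 everywhere, so the output is exactly half the input: a convenient check that the broadcasting is per-channel.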
Affiliation(s)
- Liuliu Shi
- School of Energy and Power Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Key Laboratory of Power Machinery and Engineering of Ministry of Education, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Key Laboratory of Multiphase Flow and Heat Transfer in Power Engineering, Shanghai, China
- Wentao Di
- School of Energy and Power Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Shanghai Key Laboratory of Multiphase Flow and Heat Transfer in Power Engineering, Shanghai, China
- Jinlong Liu
- Institute of Pediatric Translational Medicine, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Virtual Reality of Structural Heart Disease, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Institute for Pediatric Congenital Heart Disease, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
8. Jeon YS, Yang H, Feng M. FCSN: Global Context Aware Segmentation by Learning the Fourier Coefficients of Objects in Medical Images. IEEE J Biomed Health Inform 2024;28:1195-1206. PMID: 36441878. DOI: 10.1109/jbhi.2022.3225205.
Abstract
The encoder-decoder model is a commonly used deep-learning (DL) model for medical image segmentation. Encoder-decoder models make pixel-wise predictions that focus heavily on local patterns. As a result, the predicted mask often fails to preserve the object's shape and topology, which requires an understanding of the image's global context. In this work, we propose the Fourier Coefficient Segmentation Network (FCSN), a novel global context-aware DL model that segments an object by learning the complex Fourier coefficients of the object's mask. The Fourier coefficients are calculated by integrating over the mask's contour; hence, FCSN is naturally motivated to incorporate a broader image context when estimating the coefficients. The global context awareness of FCSN helps produce more accurate segmentations and is more robust to local perturbations, such as additive noise or motion blur. We compare FCSN with other state-of-the-art global context-aware models (UNet++, DeepLabV3+, UNETR) on 5 medical image segmentation tasks (ISIC_2018, RIM_CUP, RIM_DISC, PROSTATE, FETAL). Compared with UNETR, FCSN attains significantly lower Hausdorff scores of 19.14 (6%), 17.42 (6%), 9.16 (14%), 11.18 (22%), and 5.98 (6%) for the ISIC_2018, RIM_CUP, RIM_DISC, PROSTATE, and FETAL tasks, respectively. Moreover, FCSN is lightweight because it discards the decoder module: it requires only 29.7 M parameters, which is 75.6 M and 9.9 M fewer than UNETR and DeepLabV3+, respectively. FCSN attains inference/training speeds of 1.6 ms/img and 6.3 ms/img, which is 8× and 3× faster than UNet and UNETR. Our work is available at https://github.com/nus-morninlab/FCSN.
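The shape representation FCSN regresses, complex Fourier coefficients of a closed contour, can be illustrated without any network: treat the contour points as complex numbers, take an FFT, keep the low-frequency coefficients, and invert. This classical Fourier-descriptor sketch (not the authors' model) shows why a handful of coefficients suffices to encode a smooth closed shape.

```python
import numpy as np

def fourier_coefficients(contour, n_coeffs=16):
    """First n_coeffs complex Fourier coefficients of a closed (N, 2) contour."""
    z = contour[:, 0] + 1j * contour[:, 1]   # points as complex numbers
    return np.fft.fft(z) / len(z)            # normalized spectrum
    # (slicing to n_coeffs is done by the caller / reconstruct below)

def reconstruct(coeffs, n_coeffs, n_points):
    """Rebuild an (n_points, 2) contour from a truncated coefficient set."""
    spectrum = np.zeros(n_points, dtype=complex)
    spectrum[:n_coeffs] = coeffs[:n_coeffs] * n_points
    z = np.fft.ifft(spectrum)
    return np.stack([z.real, z.imag], axis=1)
```

A unit circle sampled at 64 points has essentially all its energy in one coefficient, so even an aggressive truncation reconstructs it exactly.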
9. Dubey G, Srivastava S, Jayswal AK, Saraswat M, Singh P, Memoria M. Fetal Ultrasound Segmentation and Measurements Using Appearance and Shape Prior Based Density Regression with Deep CNN and Robust Ellipse Fitting. J Imaging Inform Med 2024;37:247-267. PMID: 38343234. DOI: 10.1007/s10278-023-00908-8.
Abstract
Accurately segmenting the structure of the fetal head (FH) and performing biometry measurements, including head circumference (HC) estimation, is a vital requirement for addressing abnormal fetal growth during pregnancy under the expertise of experienced radiologists using ultrasound (US) images. However, accurate segmentation and measurement is a challenging task due to image artifacts, incomplete ellipse fitting, and fluctuations in FH dimensions over different trimesters. It is also highly time-consuming due to the absence of specialized features, which leads to low segmentation accuracy. To address these challenges, we propose an automatic density regression approach that incorporates appearance and shape priors into a deep learning-based network model (DR-ASPnet) with robust ellipse fitting using fetal US images. Initially, we employed multiple pre-processing steps to remove unwanted distortions and variable fluctuations and to obtain a clear view of significant features in the US images. Augmentation operations were then applied to increase the diversity of the dataset. Next, we proposed the hierarchical density regression deep convolutional neural network (HDR-DCNN) model, which involves three network models to determine the complex location of the FH for accurate segmentation during the training and testing processes. We then applied post-processing operations using contrast-enhancement filtering with a morphological operation model to smooth the region and remove unnecessary artifacts from the segmentation results. After post-processing, we applied the smoothed segmentation result to the robust ellipse fitting-based least squares (REFLS) method for HC estimation. Experimentally, the DR-ASPnet model obtains a 98.86% dice similarity coefficient (DSC) as segmentation accuracy and a 1.67 mm absolute distance (AD) as measurement accuracy, outperforming other state-of-the-art methods. Finally, we achieved a 0.99 correlation coefficient (CC) between the measured and predicted HC values on the HC18 dataset.
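Once an ellipse has been fitted to the skull, HC is just the ellipse perimeter. The abstract does not say which perimeter formula REFLS uses; a standard choice is Ramanujan's second approximation, sketched here from the fitted semi-axes.

```python
import math

def head_circumference(a, b):
    """Ellipse perimeter from semi-axes a, b (same units, e.g. mm),
    via Ramanujan's second approximation (error far below 1 part in 10^6
    for realistic head eccentricities)."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))
```

For a circle (a == b) the formula collapses exactly to 2πr, which is the easiest way to sanity-check an implementation.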
Affiliation(s)
- Gaurav Dubey
- Department of Computer Science, KIET Group of Institutions, Delhi-NCR, Ghaziabad, U.P, India
- Mala Saraswat
- Department of Computer Science, Bennett University, Greater Noida, India
- Pooja Singh
- Shiv Nadar University, Greater Noida, Uttar Pradesh, India
- Minakshi Memoria
- CSE Department, UIT, Uttaranchal University, Dehradun, Uttarakhand, India
10. Enache IA, Iovoaica-Rămescu C, Ciobanu ȘG, Berbecaru EIA, Vochin A, Băluță ID, Istrate-Ofițeru AM, Comănescu CM, Nagy RD, Iliescu DG. Artificial Intelligence in Obstetric Anomaly Scan: Heart and Brain. Life (Basel) 2024;14:166. PMID: 38398675. PMCID: PMC10890185. DOI: 10.3390/life14020166.
Abstract
Background: The ultrasound scan is the first tool that obstetricians use in fetal evaluation, but it can be limited by fetal mobility or position, excessive thickness of the maternal abdominal wall, or the presence of post-surgical scars on the maternal abdominal wall. Artificial intelligence (AI) has already been used effectively to measure biometric parameters, automatically recognize standard planes of fetal ultrasound evaluation, and diagnose disease, complementing conventional imaging methods. Combining clinical information, ultrasound scan images, and a machine-learning program creates algorithms capable of assisting healthcare providers by reducing the workload, shortening the examination, and increasing the rate of correct diagnosis. The recent remarkable expansion in the use of electronic medical records and diagnostic imaging coincides with the enormous success of machine-learning algorithms in image-identification tasks. Objectives: We aim to review the most relevant deep-learning studies on ultrasound anomaly-scan evaluation of the most complex fetal systems (heart and brain), which enclose the most frequent anomalies.
Affiliation(s)
- Iuliana-Alina Enache
- Doctoral School, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Cătălina Iovoaica-Rămescu
- Doctoral School, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Ștefan Gabriel Ciobanu
- Doctoral School, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Elena Iuliana Anamaria Berbecaru
- Doctoral School, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Andreea Vochin
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Ionuț Daniel Băluță
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Anca Maria Istrate-Ofițeru
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Ginecho Clinic, Medgin SRL, 200333 Craiova, Romania
- Research Centre for Microscopic Morphology and Immunology, University of Medicine and Pharmacy of Craiova, 200642 Craiova, Romania
- Cristina Maria Comănescu
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Ginecho Clinic, Medgin SRL, 200333 Craiova, Romania
- Department of Anatomy, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Rodica Daniela Nagy
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Ginecho Clinic, Medgin SRL, 200333 Craiova, Romania
- Dominic Gabriel Iliescu
- Department of Obstetrics and Gynecology, University Emergency County Hospital, 200642 Craiova, Romania
- Ginecho Clinic, Medgin SRL, 200333 Craiova, Romania
- Department of Obstetrics and Gynecology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
11. Alzubaidi M, Agus M, Makhlouf M, Anver F, Alyafei K, Househ M. Large-scale annotation dataset for fetal head biometry in ultrasound images. Data Brief 2023;51:109708. PMID: 38020431. PMCID: PMC10630602. DOI: 10.1016/j.dib.2023.109708.
Abstract
This dataset features a collection of 3832 high-resolution ultrasound images, each with dimensions of 959×661 pixels, focused on fetal heads. The images highlight specific anatomical regions: the brain, cavum septum pellucidum (CSP), and lateral ventricles (LV). The dataset was assembled under the Creative Commons Attribution 4.0 International license, using previously anonymized and de-identified images to maintain ethical standards. Each image is complemented by a CSV file detailing pixel size in millimeters (mm). For enhanced compatibility and usability, the dataset is available in 11 widely used formats, including Cityscapes, YOLO, CVAT, Datumaro, COCO, TFRecord, PASCAL, LabelMe, Segmentation mask, OpenImage, and ICDAR. This broad range of formats ensures adaptability for various computer vision tasks, such as classification, segmentation, and object detection, and compatibility with multiple medical imaging software packages and deep learning frameworks. The reliability of the annotations was verified through a two-step validation process involving a senior attending physician and a radiologic technologist. Intraclass correlation coefficients (ICC) and Jaccard similarity indices (JS) were used to quantify inter-rater agreement. The dataset exhibits high annotation reliability, with ICC values averaging 0.859 and 0.889 and JS values of 0.855 and 0.857 across two iterative rounds of annotation. This dataset is designed to be an invaluable resource for ongoing and future research in medical imaging and computer vision. It is particularly suited to applications in prenatal diagnostics, clinical diagnosis, and computer-assisted interventions. Its detailed annotations, broad compatibility, and ethical compliance make it a highly reusable and adaptable tool for developing algorithms aimed at improving maternal and fetal health.
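The Jaccard similarity used above to quantify inter-rater agreement is the intersection-over-union of the two annotators' masks; a generic numpy sketch (not the dataset authors' validation code):

```python
import numpy as np

def jaccard_similarity(mask_a, mask_b):
    """Jaccard index (IoU) between two binary annotation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks agree trivially
    return np.logical_and(a, b).sum() / union
```

Unlike the Dice coefficient, Jaccard counts the overlap once against the union, so for the same pair of masks it is always the lower of the two scores; values around 0.85 therefore indicate very close annotator agreement.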
Affiliation(s)
- Mahmood Alzubaidi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Marco Agus
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Fatima Anver
- College of Health Sciences, University of Doha for Science and Technology, Doha, 24449, Qatar
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
12. Qiu C, Huang Z, Lin C, Zhang G, Ying S. A despeckling method for ultrasound images utilizing content-aware prior and attention-driven techniques. Comput Biol Med 2023;166:107515. PMID: 37839221. DOI: 10.1016/j.compbiomed.2023.107515.
Abstract
The despeckling of ultrasound images enhances image quality and facilitates the precise treatment of conditions such as tumors. However, existing methods for eliminating speckle noise can destroy image texture features, impacting clinical judgment. Maintaining clear lesion boundaries while eliminating speckle noise is therefore a challenging task. This paper presents an innovative approach for denoising ultrasound images using a novel noise reduction network model called content-aware prior and attention-driven (CAPAD). The model employs a neural network to automatically capture the hidden prior features in ultrasound images to guide denoising, and embeds the denoiser into the optimization module to simultaneously optimize parameters and noise. Moreover, the model incorporates a content-aware attention module and a loss function that preserves the structural characteristics of the image. These additions enhance the network's capacity to capture and retain valuable information. Extensive qualitative evaluation and quantitative analysis on a comprehensive dataset provide compelling evidence of the model's superior denoising capabilities: it excels in noise suppression while preserving the underlying structures of the ultrasound images. Compared to other denoising algorithms, it demonstrates an improvement of approximately 5.88% in PSNR and approximately 3.61% in SSIM. Furthermore, using CAPAD as a preprocessing step for breast tumor segmentation in ultrasound images can greatly improve segmentation accuracy: its use leads to a notable 10.43% improvement in the AUPRC for breast cancer tumor segmentation.
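For context, the PSNR figure quoted above is the standard peak signal-to-noise ratio between a reference image and a denoised estimate; a minimal sketch of the metric itself (not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, denoised, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    denoised estimate; higher means the estimate is closer to the reference."""
    ref = np.asarray(reference, dtype=np.float64)
    est = np.asarray(denoised, dtype=np.float64)
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```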
Affiliation(s)
- Chenghao Qiu
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610000, Sichuan, China
- Zifan Huang
- School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, 524088, China
- Cong Lin
- School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, 524088, China
- Guodao Zhang
- Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Shenpeng Ying
- Department of Radiotherapy, Taizhou Central Hospital (Taizhou University Hospital), Taizhou, 318000, China
13
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298 PMCID: PMC10649694 DOI: 10.3390/jcm12216833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Revised: 10/17/2023] [Accepted: 10/25/2023] [Indexed: 11/15/2023] Open
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is considered cost-effective and easily accessible but is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with available full texts were assigned to the OB/GYN subspecialties and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
14
Gao J, Lao Q, Liu P, Yi H, Kang Q, Jiang Z, Wu X, Li K, Chen Y, Zhang L. Anatomically Guided Cross-Domain Repair and Screening for Ultrasound Fetal Biometry. IEEE J Biomed Health Inform 2023; 27:4914-4925. [PMID: 37486830 DOI: 10.1109/jbhi.2023.3298096] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/26/2023]
Abstract
Ultrasound-based estimation of fetal biometry is extensively used to diagnose prenatal abnormalities and to monitor fetal growth, for which accurate segmentation of the fetal anatomy is a crucial prerequisite. Although deep neural network-based models have achieved encouraging results on this task, inevitable distribution shifts in ultrasound images can still cause severe performance drops in real-world deployment scenarios. In this article, we propose a complete ultrasound fetal examination system that deals with this problem by repairing and screening anatomically implausible results. Our system consists of three main components: a routine segmentation network, a repair network guided by fetal anatomical key points, and a shape-coding based selective screener. Guided by the anatomical key points, the repair network has stronger cross-domain repair capabilities and can substantially improve the outputs of the segmentation network. By quantifying the distance between an arbitrary segmentation mask and its corresponding anatomical shape class, the shape-coding based selective screener can then effectively reject entirely implausible results that cannot be fully repaired. Extensive experiments demonstrate that our proposed framework provides strong anatomical guarantees and outperforms other methods in three different cross-domain scenarios.
15
Naderi M, Karimi N, Emami A, Shirani S, Samavi S. Dynamic-Pix2Pix: Medical image segmentation by injecting noise to cGAN for modeling input and target domain joint distributions with limited training data. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104877] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
16
Day TG, Matthew J, Budd S, Hajnal JV, Simpson JM, Razavi R, Kainz B. Sonographer interaction with artificial intelligence: collaboration or conflict? Ultrasound Obstet Gynecol 2023; 62:167-174. [PMID: 37523514 DOI: 10.1002/uog.26238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Revised: 04/05/2023] [Accepted: 04/14/2023] [Indexed: 08/02/2023]
Affiliation(s)
- T G Day
- Department of Congenital Cardiology, Evelina London Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- J Matthew
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- S Budd
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- J V Hajnal
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- J M Simpson
- Department of Congenital Cardiology, Evelina London Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- R Razavi
- Department of Congenital Cardiology, Evelina London Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- B Kainz
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department of Computing, Faculty of Engineering, Imperial College London, London, UK
17
Ciceri T, Squarcina L, Pigoni A, Ferro A, Montano F, Bertoldo A, Persico N, Boito S, Triulzi FM, Conte G, Brambilla P, Peruzzo D. Geometric Reliability of Super-Resolution Reconstructed Images from Clinical Fetal MRI in the Second Trimester. Neuroinformatics 2023; 21:549-563. [PMID: 37284977 PMCID: PMC10406722 DOI: 10.1007/s12021-023-09635-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/20/2023] [Indexed: 06/08/2023]
Abstract
Fetal Magnetic Resonance Imaging (MRI) is an important noninvasive diagnostic tool to characterize central nervous system (CNS) development, significantly contributing to pregnancy management. In clinical practice, fetal MRI of the brain includes the acquisition of fast anatomical sequences over different planes, on which several biometric measurements are manually extracted. Recently, modern toolkits use the acquired two-dimensional (2D) images to reconstruct a Super-Resolution (SR) isotropic volume of the brain, enabling three-dimensional (3D) analysis of the fetal CNS. We analyzed 17 fetal MR exams performed in the second trimester, including orthogonal T2-weighted (T2w) Turbo Spin Echo (TSE) and balanced Fast Field Echo (b-FFE) sequences. For each subject and type of sequence, three distinct high-resolution volumes were reconstructed via the NiftyMIC, MIALSRTK, and SVRTK toolkits. Fifteen biometric measurements were assessed both on the acquired 2D images and on the SR reconstructed volumes, and compared using Passing-Bablok regression, Bland-Altman plot analysis, and statistical tests. Results indicate that NiftyMIC and MIALSRTK provide reliable SR reconstructed volumes, suitable for biometric assessments. NiftyMIC also improves the operator intraclass correlation coefficient on the quantitative biometric measures with respect to the acquired 2D images. In addition, TSE sequences lead to fetal brain reconstructions that are more robust against intensity artifacts than b-FFE sequences, despite the latter exhibiting more defined anatomical details. Our findings strengthen the adoption of automatic toolkits for fetal brain reconstruction to perform biometry evaluations of fetal brain development on common clinical MR at an early pregnancy stage.
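Bland-Altman analysis, used above to compare measurements from 2D images and SR volumes, reduces each paired comparison to a bias (mean difference) and 95% limits of agreement; a minimal sketch of the computation (illustrative only, not the study's implementation):

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Return the bias (mean difference) and 95% limits of agreement
    between two sets of paired measurements, e.g. biometry taken on
    2D slices vs. on a super-resolution volume."""
    a = np.asarray(measure_a, dtype=float)
    b = np.asarray(measure_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Measurements whose differences fall within the limits of agreement are typically judged interchangeable for clinical purposes.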
Affiliation(s)
- Tommaso Ciceri
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
- Department of Information Engineering, University of Padua, Padua, Italy
- Letizia Squarcina
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Alessandro Pigoni
- Social and Affective Neuroscience Group, IMT School for Advanced Studies Lucca, Lucca, Italy
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Adele Ferro
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Florian Montano
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
- Alessandra Bertoldo
- Department of Information Engineering, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
- Nicola Persico
- Department of Woman, Child and Newborn, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Simona Boito
- Department of Woman, Child and Newborn, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Fabio Maria Triulzi
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Department of Services and Preventive Medicine, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Giorgio Conte
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Department of Services and Preventive Medicine, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Paolo Brambilla
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Denis Peruzzo
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
18
Bastiaansen WAP, Klein S, Koning AHJ, Niessen WJ, Steegers-Theunissen RPM, Rousian M. Computational methods for the analysis of early-pregnancy brain ultrasonography: a systematic review. EBioMedicine 2023; 89:104466. [PMID: 36796233 PMCID: PMC9958260 DOI: 10.1016/j.ebiom.2023.104466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 01/09/2023] [Accepted: 01/23/2023] [Indexed: 02/16/2023] Open
Abstract
BACKGROUND Early screening of the brain is becoming routine clinical practice. Currently, this screening is performed by manual measurements and visual analysis, which is time-consuming and prone to errors. Computational methods may support this screening. Hence, the aim of this systematic review is to gain insight into the future research directions needed to bring automated early-pregnancy ultrasound analysis of the human brain to clinical practice. METHODS We searched PubMed (Medline ALL Ovid), EMBASE, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, and Google Scholar, from inception until June 2022. This study is registered in PROSPERO at CRD42020189888. Studies about computational methods for the analysis of human brain ultrasonography acquired before the 20th week of pregnancy were included. The key reported attributes were: level of automation, learning-based or not, the usage of clinical routine data depicting normal and abnormal brain development, public sharing of program source code and data, and analysis of the confounding factors. FINDINGS Our search identified 2575 studies, of which 55 were included. 76% used an automatic method, 62% a learning-based method, 45% used clinical routine data and, in addition, for 13% the data depicted abnormal development. None of the studies publicly shared the program source code and only two studies shared the data. Finally, 35% did not analyse the influence of confounding factors. INTERPRETATION Our review showed an interest in automatic, learning-based methods. To bring these methods to clinical practice, we recommend that studies: use routine clinical data depicting both normal and abnormal development, make their dataset and program source code publicly available, and be attentive to the influence of confounding factors. The introduction of automated computational methods for early-pregnancy brain ultrasonography will save valuable time during screening, and ultimately lead to better detection, treatment and prevention of neurodevelopmental disorders. FUNDING The Erasmus MC Medical Research Advisor Committee (grant number: FB 379283).
Affiliation(s)
- Wietske A P Bastiaansen
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands; Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Anton H J Koning
- Department of Pathology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Wiro J Niessen
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Melek Rousian
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
19
Al-Battal AF, Lerman IR, Nguyen TQ. Multi-path decoder U-Net: A weakly trained real-time segmentation network for object detection and localization in ultrasound scans. Comput Med Imaging Graph 2023; 107:102205. [PMID: 37030216 DOI: 10.1016/j.compmedimag.2023.102205] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 02/19/2023] [Accepted: 02/19/2023] [Indexed: 04/10/2023]
Abstract
Detecting and localizing an anatomical structure of interest within the field of view of an ultrasound scan is an essential step in many diagnostic and therapeutic procedures. However, ultrasound scans suffer from high levels of variability across sonographers and patients, making it challenging for sonographers to accurately identify and locate these structures without extensive experience. Segmentation-based convolutional neural networks (CNNs) have been proposed as a solution to assist sonographers in this task. Despite their accuracy, these networks require pixel-wise annotations for training: an expensive and labor-intensive operation that requires the expertise of an experienced practitioner to identify the precise outline of the structures of interest. This complicates, delays, and increases the cost of network training and deployment. To address this problem, we propose a multi-path decoder U-Net architecture that is trained on bounding box segmentation maps and does not require pixel-wise annotations. We show that the network can be trained on small training sets, as is typical of medical imaging datasets, reducing the cost and time needed for deployment and use in clinical settings. The multi-path decoder design allows for better training of deeper layers and earlier attention to the target anatomical structures of interest. This architecture offers up to a 7% relative improvement over the U-Net architecture in localization and detection performance, with an increase of only 0.75% in the number of parameters. Its performance is on par with, or slightly better than, the more computationally expensive U-Net++, which has 20% more parameters, making the proposed architecture a more computationally efficient alternative for real-time object detection and localization in ultrasound scans.
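Training on bounding-box segmentation maps means the weak labels are simply rasterised boxes rather than precise outlines; a minimal sketch of how such a target map could be built (function name and box format are assumptions, not the paper's code):

```python
import numpy as np

def boxes_to_target(height, width, boxes):
    """Rasterise bounding boxes into a binary map usable as a weak
    segmentation target in place of pixel-wise annotations.
    Each box is (x0, y0, x1, y1) with exclusive upper bounds."""
    target = np.zeros((height, width), dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        target[y0:y1, x0:x1] = 1
    return target
```

A segmentation network trained against such maps learns coarse localization at a fraction of the annotation cost of true outlines.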
Affiliation(s)
- Abdullah F Al-Battal
- Electrical and Computer Engineering Department, University of California, San Diego, CA 92093, USA; Electrical Engineering Department, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
- Imanuel R Lerman
- Electrical and Computer Engineering Department, University of California, San Diego, CA 92093, USA; UC San Diego Health, University of California, San Diego, CA 92093, USA
- Truong Q Nguyen
- Electrical and Computer Engineering Department, University of California, San Diego, CA 92093, USA
20
Balagalla UB, Jayasooriya J, de Alwis C, Subasinghe A. Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images – a survey. Comput Methods Biomech Biomed Eng Imaging Vis 2023. [DOI: 10.1080/21681163.2023.2179343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Affiliation(s)
- U. B. Balagalla
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- J.V.D. Jayasooriya
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- C. de Alwis
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- A. Subasinghe
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
21
Sarno L, Neola D, Carbone L, Saccone G, Carlea A, Miceli M, Iorio GG, Mappa I, Rizzo G, Girolamo RD, D'Antonio F, Guida M, Maruotti GM. Use of artificial intelligence in obstetrics: not quite ready for prime time. Am J Obstet Gynecol MFM 2023; 5:100792. [PMID: 36356939 DOI: 10.1016/j.ajogmf.2022.100792] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 10/18/2022] [Accepted: 10/28/2022] [Indexed: 11/09/2022]
Abstract
Artificial intelligence is finding several applications in healthcare settings. This study aimed to report evidence on the effectiveness of artificial intelligence applications in obstetrics. Through a narrative review of the literature, we described artificial intelligence use in different obstetrical areas: prenatal diagnosis, fetal heart monitoring, prediction and management of pregnancy-related complications (preeclampsia, preterm birth, gestational diabetes mellitus, and placenta accreta spectrum), and labor. Artificial intelligence appears to be a promising tool to help clinicians in daily clinical activity. The main advantages that emerged from this review are the reduction of inter- and intra-operator variability, shorter procedure times, and improved overall diagnostic performance. However, the diffusion of these systems into routine clinical practice still raises several issues. Reported evidence is limited, and further studies are needed to confirm the clinical applicability of artificial intelligence. Moreover, better training of the clinicians who will use these systems should be ensured, and evidence-based guidelines on this topic should be produced to enhance the strengths of artificial systems and minimize their limits.
Affiliation(s)
- Laura Sarno
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Daniele Neola
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Luigi Carbone
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Gabriele Saccone
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Annunziata Carlea
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Marco Miceli
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida); CEINGE Biotecnologie Avanzate, Naples, Italy (Dr Miceli)
- Giuseppe Gabriele Iorio
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Ilenia Mappa
- Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy (Dr Mappa and Dr Rizzo)
- Giuseppe Rizzo
- Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy (Dr Mappa and Dr Rizzo)
- Raffaella Di Girolamo
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Francesco D'Antonio
- Center for Fetal Care and High Risk Pregnancy, Department of Obstetrics and Gynecology, University G. D'Annunzio of Chieti-Pescara, Chieti, Italy (Dr D'Antonio)
- Maurizio Guida
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Giuseppe Maria Maruotti
- Gynecology and Obstetrics Unit, Department of Public Health, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Maruotti)
22
The segmentation effect of style transfer on fetal head ultrasound image: a study of multi-source data. Med Biol Eng Comput 2023; 61:1017-1031. [PMID: 36645647 DOI: 10.1007/s11517-022-02747-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2022] [Accepted: 12/22/2022] [Indexed: 01/17/2023]
Abstract
The generalization ability of fetal head segmentation methods is reduced when data are obtained with different machines, settings, and operators. To preserve generalization ability, we proposed a Fourier domain adaptation (FDA) method based on amplitude and phase to achieve better segmentation performance on multi-source ultrasound data. Given a source/target image pair, the Fourier domain information is first obtained using the fast Fourier transform. Secondly, the target information is mapped to the source Fourier domain through the phase adjustment parameter α and the amplitude adjustment parameter β. Thirdly, the target image and the preprocessed source image obtained through the inverse discrete Fourier transform are used as the input of the segmentation network. Finally, the Dice loss is computed to adjust α and β. Among existing transform methods, the proposed method achieved the best performance. The adaptive-FDA method provides a solution for the automatic preprocessing of multi-source data. Experimental results show that it quantitatively improves segmentation results and model generalization performance.
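The core idea of Fourier domain adaptation is to transfer low-frequency spectral content between domains while keeping the source phase. The sketch below shows the classic fixed-window amplitude swap; it is illustrative only and does not reproduce the paper's adaptive α/β formulation, which also learns a phase adjustment:

```python
import numpy as np

def fda_amplitude_swap(source, target, beta=0.05):
    """Map a target-domain image's low-frequency amplitude onto a
    source-domain image while keeping the source phase (classic FDA;
    `beta` controls the size of the swapped low-frequency window)."""
    fs = np.fft.fft2(source.astype(np.float64))
    ft = np.fft.fft2(target.astype(np.float64))
    amp_s = np.fft.fftshift(np.abs(fs))   # centre the low frequencies
    amp_t = np.fft.fftshift(np.abs(ft))
    phase_s = np.angle(fs)

    h, w = source.shape
    b = max(1, int(min(h, w) * beta))     # half-size of the swap window
    ch, cw = h // 2, w // 2
    amp_s[ch - b:ch + b, cw - b:cw + b] = amp_t[ch - b:ch + b, cw - b:cw + b]

    mixed = np.fft.ifftshift(amp_s) * np.exp(1j * phase_s)
    return np.real(np.fft.ifft2(mixed))
```

The adapted source image is then fed to the segmentation network in place of the original, so the network trains on source labels under target-like imaging statistics.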
23
Caspi Y, de Zwarte SMC, Iemenschot IJ, Lumbreras R, de Heus R, Bekker MN, Hulshoff Pol H. Automatic measurements of fetal intracranial volume from 3D ultrasound scans. Front Neuroimaging 2022; 1:996702. [PMID: 37555155 PMCID: PMC10406279 DOI: 10.3389/fnimg.2022.996702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 09/15/2022] [Indexed: 08/10/2023]
Abstract
Three-dimensional fetal ultrasound is commonly used to study the volumetric development of brain structures. To date, only a limited number of automatic procedures for delineating the intracranial volume exist. Hence, intracranial volume measurements from three-dimensional ultrasound images are predominantly performed manually. Here, we present and validate an automated tool to extract the intracranial volume from three-dimensional fetal ultrasound scans. The procedure is based on the registration of a brain model to a subject brain. The intracranial volume of the subject is measured by applying the inverse of the final transformation to an intracranial mask of the brain model. The automatic measurements showed a high correlation with manual delineation of the same subjects at two gestational ages, namely, around 20 and 30 weeks (linear fitting R2(20 weeks) = 0.88, R2(30 weeks) = 0.77; Intraclass Correlation Coefficients: 20 weeks=0.94, 30 weeks = 0.84). Overall, the automatic intracranial volumes were larger than the manually delineated ones (84 ± 16 vs. 76 ± 15 cm3; and 274 ± 35 vs. 237 ± 28 cm3), probably due to differences in cerebellum delineation. Notably, the automated measurements reproduced both the non-linear pattern of fetal brain growth and the increased inter-subject variability for older fetuses. By contrast, there was some disagreement between the manual and automatic delineation concerning the size of sexual dimorphism differences. The method presented here provides a relatively efficient way to delineate volumes of fetal brain structures like the intracranial volume automatically. It can be used as a research tool to investigate these structures in large cohorts, which will ultimately aid in understanding fetal structural human brain development.
Affiliation(s)
- Yaron Caspi
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Sonja M. C. de Zwarte
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Iris J. Iemenschot
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Raquel Lumbreras
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Roel de Heus
- Department of Obstetrics and Gynaecology, St. Antonius Hospital, Utrecht, Netherlands
- Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Mireille N. Bekker
- Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Hilleke Hulshoff Pol
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Department of Psychology, Utrecht University, Utrecht, Netherlands
24
van Knippenberg L, van Sloun RJG, Mischi M, de Ruijter J, Lopata R, Bouwman RA. Unsupervised domain adaptation method for segmenting cross-sectional CCA images. Comput Methods Programs Biomed 2022; 225:107037. [PMID: 35907375 DOI: 10.1016/j.cmpb.2022.107037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 07/20/2022] [Accepted: 07/21/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVES Automatic vessel segmentation in ultrasound is challenging due to the quality of the ultrasound images, which is affected by attenuation, a high level of speckle noise, and acoustic shadowing. Recently, deep convolutional neural networks have grown in popularity thanks to their strong performance on image segmentation problems, including vessel segmentation. Traditionally, large labeled datasets are required to train a network that achieves high performance and generalizes well to different orientations, transducers, and ultrasound scanners. However, such large datasets are rare, given that it is challenging and time-consuming to acquire and manually annotate in-vivo data. METHODS In this work, we present a model-based, unsupervised domain adaptation method that consists of two stages. In the first stage, the network is trained on simulated ultrasound images, which have an accurate ground truth. In the second stage, the network continues training on in-vivo data in an unsupervised way, therefore not requiring the data to be labeled. Rather than using an adversarial neural network, prior knowledge of the elliptical shape of the segmentation mask is used to detect unexpected outputs. RESULTS The segmentation performance was quantified using manually segmented images as ground truth. Thanks to the proposed domain adaptation method, the median Dice similarity coefficient increased from 0 to 0.951, outperforming both a domain-adversarial neural network (median Dice 0.922) and a state-of-the-art Star-Kalman algorithm that was specifically designed for this dataset (median Dice 0.942). CONCLUSIONS The results show that it is feasible to first train a neural network on simulated data, and then apply model-based domain adaptation to further improve segmentation performance by training on unlabeled in-vivo data. This overcomes the limitation of conventional deep learning approaches, which require large amounts of manually labeled in-vivo data. Since the proposed domain adaptation method only requires prior knowledge of the shape of the segmentation mask, its performance can be explored in other domains and applications in future research.
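The elliptical-shape prior described in the METHODS can be illustrated with a simple check: compare a predicted mask against the ellipse implied by its own second-order moments, and flag outputs whose overlap is low. This is only an illustrative sketch of the idea, not the paper's implementation; the moment-based fit and the score threshold are assumptions:

```python
import numpy as np

def ellipse_agreement(mask):
    """IoU between a binary mask and the ellipse implied by its second-order
    moments; low scores flag non-elliptical network outputs."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0.0
    pts = np.stack([xs, ys], axis=1).astype(float)
    mu = pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(pts.T))
    # For a filled ellipse with semi-axes (a, b), the coordinate covariance
    # eigenvalues are a^2/4 and b^2/4, so a point is inside the moment
    # ellipse iff sum(c_k^2 / (4 * lambda_k)) <= 1 in the eigenbasis.
    grid = np.stack(np.meshgrid(np.arange(mask.shape[1]),
                                np.arange(mask.shape[0])), axis=-1).reshape(-1, 2)
    proj = (grid - mu) @ evecs
    ellipse = ((proj ** 2 / (4.0 * evals)).sum(axis=1) <= 1.0).reshape(mask.shape)
    inter = np.logical_and(mask, ellipse).sum()
    union = np.logical_or(mask, ellipse).sum()
    return float(inter) / float(union)

# A clean disc is close to its moment ellipse; a ring (e.g. only the vessel
# wall segmented, lumen missing) is not, so it would be flagged.
yy, xx = np.mgrid[:64, :64]
disc = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2
ring = disc & ((xx - 32) ** 2 + (yy - 32) ** 2 >= 12 ** 2)
good, bad = ellipse_agreement(disc), ellipse_agreement(ring)
```

A threshold on this score could then decide which unlabeled in-vivo predictions are trusted during the second training stage.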
Affiliation(s)
- Luuk van Knippenberg
- Department of Electrical Engineering, Eindhoven University of Technology, the Netherlands.
- Ruud J G van Sloun
- Department of Electrical Engineering, Eindhoven University of Technology, the Netherlands; Department of Anesthesiology, Catharina Hospital Eindhoven, the Netherlands
- Massimo Mischi
- Department of Electrical Engineering, Eindhoven University of Technology, the Netherlands; Department of Anesthesiology, Catharina Hospital Eindhoven, the Netherlands
- Joerik de Ruijter
- Department of Biomedical Engineering, Eindhoven University of Technology, the Netherlands
- Richard Lopata
- Department of Biomedical Engineering, Eindhoven University of Technology, the Netherlands; Department of Anesthesiology, Catharina Hospital Eindhoven, the Netherlands
- R Arthur Bouwman
- Department of Anesthesiology, Catharina Hospital Eindhoven, the Netherlands
25
Alzubaidi M, Agus M, Shah U, Makhlouf M, Alyafei K, Househ M. Ensemble Transfer Learning for Fetal Head Analysis: From Segmentation to Gestational Age and Weight Prediction. Diagnostics (Basel) 2022; 12:diagnostics12092229. [PMID: 36140628 PMCID: PMC9497941 DOI: 10.3390/diagnostics12092229] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 08/25/2022] [Accepted: 08/26/2022] [Indexed: 11/16/2022] Open
Abstract
Ultrasound is one of the most commonly used imaging modalities in obstetrics to monitor the growth of a fetus during the gestation period. Specifically, ultrasound images are routinely utilized to gather fetal information, including body measurements, anatomical structure, fetal movements, and pregnancy complications. Recent developments in artificial intelligence and computer vision provide new methods for the automated analysis of medical images in many domains, including ultrasound. We present a full end-to-end framework for segmenting, measuring, and estimating fetal gestational age and weight based on two-dimensional ultrasound images of the fetal head. Our segmentation framework is based on the following components: (i) eight segmentation architectures (UNet, UNet Plus, Attention UNet, UNet 3+, TransUNet, FPN, LinkNet, and Deeplabv3) fine-tuned with the lightweight EfficientNetB0 backbone, and (ii) a weighted voting method for building an optimized ensemble transfer learning model (ETLM). The ETLM was then used to segment the fetal head and to perform accurate measurements of the head circumference and seven other values of the fetal head, which we incorporated into a multiple regression model for predicting the week of gestational age and the estimated fetal weight (EFW). We finally validated the regression model by comparing our results with expert physicians and longitudinal references. We evaluated the performance of our framework on the public HC18 dataset: we obtained 98.53% mean intersection over union (mIoU) as the segmentation accuracy, surpassing state-of-the-art methods; as measurement accuracy, we obtained a 1.87 mm mean absolute difference (MAD). Finally, we obtained a 0.03% mean square error (MSE) in predicting the week of gestational age and a 0.05% MSE in predicting the EFW.
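The weighted voting step (ii) can be sketched as fusing per-model binary masks with per-model weights. The weights below are hypothetical stand-ins (e.g. each model's validation Dice score), not the paper's values:

```python
import numpy as np

def weighted_vote(masks, weights, threshold=0.5):
    """Fuse binary masks from several segmentation models by weighted voting."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so foreground support lies in [0, 1]
    support = np.tensordot(w, np.stack([m.astype(float) for m in masks]), axes=1)
    return (support >= threshold).astype(np.uint8)

# Hypothetical outputs of three models on a 2 x 2 image; the strongest model
# disagrees with the other two on one pixel and wins the vote there.
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
m3 = np.array([[1, 0], [1, 0]])
fused = weighted_vote([m1, m2, m3], weights=[0.5, 0.25, 0.25])
```

In an ensemble of eight architectures the same fusion applies, with one weight per architecture.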
Affiliation(s)
- Mahmood Alzubaidi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Correspondence: (M.A.); (M.H.)
- Marco Agus
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Michel Makhlouf
- Sidra Medical and Research Center, Sidra Medicine, Doha P.O. Box 26999, Qatar
- Khalid Alyafei
- Sidra Medical and Research Center, Sidra Medicine, Doha P.O. Box 26999, Qatar
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Correspondence: (M.A.); (M.H.)
26
Avena-Zampieri CL, Hutter J, Rutherford M, Milan A, Hall M, Egloff A, Lloyd DFA, Nanda S, Greenough A, Story L. Assessment of the fetal lungs in utero. Am J Obstet Gynecol MFM 2022; 4:100693. [PMID: 35858660 PMCID: PMC9811184 DOI: 10.1016/j.ajogmf.2022.100693] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 07/12/2022] [Accepted: 07/12/2022] [Indexed: 01/07/2023]
Abstract
Antenatal diagnosis of abnormal pulmonary development has improved significantly over recent years because of progress in imaging techniques. Two-dimensional ultrasound is the mainstay of investigation of pulmonary pathology during pregnancy, providing good prognostication in conditions such as congenital diaphragmatic hernia; however, it is less validated in other high-risk groups such as those with congenital pulmonary airway malformation or preterm premature rupture of membranes. Three-dimensional assessment of lung volume and size is now possible using ultrasound or magnetic resonance imaging; however, the use of these techniques is still limited because of unpredictable fetal motion, and such tools have also been inadequately validated in high-risk populations other than those with congenital diaphragmatic hernia. The advent of advanced, functional magnetic resonance imaging techniques such as diffusion and T2* imaging, and the development of postprocessing pipelines that facilitate motion correction, have enabled not only more accurate evaluation of pulmonary size, but also assessment of tissue microstructure and perfusion. In the future, fetal magnetic resonance imaging may have an increasing role in the prognostication of pulmonary abnormalities and in monitoring current and future antenatal therapies to enhance lung development. This review aims to examine the current imaging methods available for assessment of antenatal lung development and to outline possible future directions.
Affiliation(s)
- Carla L Avena-Zampieri
- Department of Women and Children's Health, King's College London, London, United Kingdom; Centre for the Developing Brain, King's College London, London, United Kingdom
- Jana Hutter
- Centre for the Developing Brain, King's College London, London, United Kingdom
- Mary Rutherford
- Centre for the Developing Brain, King's College London, London, United Kingdom
- Anna Milan
- Neonatal Unit, Guy's and St Thomas' National Health Service Foundation Trust, London, United Kingdom
- Megan Hall
- Department of Women and Children's Health, King's College London, London, United Kingdom; Centre for the Developing Brain, King's College London, London, United Kingdom
- Alexia Egloff
- Centre for the Developing Brain, King's College London, London, United Kingdom
- David F A Lloyd
- Centre for the Developing Brain, King's College London, London, United Kingdom
- Surabhi Nanda
- Fetal Medicine Unit, Guy's and St Thomas' National Health Service Foundation Trust, London, United Kingdom
- Anne Greenough
- Department of Women and Children's Health, King's College London, London, United Kingdom; Neonatal Unit, King's College Hospital, London, United Kingdom; Asthma UK Centre in Allergic Mechanisms of Asthma, King's College London, London, United Kingdom; National Institute for Health and Care Research Biomedical Research Centre, Guy's & St Thomas National Health Service Foundation Trust and King's College London, London, United Kingdom
- Lisa Story
- Department of Women and Children's Health, King's College London, London, United Kingdom; Centre for the Developing Brain, King's College London, London, United Kingdom; Fetal Medicine Unit, Guy's and St Thomas' National Health Service Foundation Trust, London, United Kingdom.
27
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024 PMCID: PMC9287600 DOI: 10.1016/j.isci.2022.104713] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/09/2022] [Accepted: 06/28/2022] [Indexed: 11/26/2022] Open
Abstract
Several reviews have been conducted regarding artificial intelligence (AI) techniques to improve pregnancy outcomes, but none focus on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases; out of 1269 studies, 107 were included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The areas that gained the most traction within the pregnancy domain were the fetal head (43), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications.
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized by area: general, head, heart, face, and abdomen
- The most used AI techniques are classification, segmentation, object detection, and reinforcement learning
- The research and practical implications are included
28
Abstract
In the field of medical imaging, the division of an image into meaningful structures using image segmentation is an essential pre-processing step for analysis. Many studies have been carried out to solve the general problem of evaluating image segmentation results. One of the main focuses in the computer vision field is artificial intelligence algorithms for segmentation and classification, including machine learning and deep learning approaches. The main drawback of supervised segmentation approaches is that a large dataset of ground truth validated by medical experts is required. For this reason, many research groups have developed their segmentation approaches according to their specific needs. However, a generalised application aimed at visualizing, assessing, and comparing the results of different methods, facilitating the generation of a ground-truth repository, is not found in the recent literature. In this paper, a new graphical user interface application (MedicalSeg) for the management of medical imaging based on pre-processing and segmentation is presented. The objective is twofold: first, to create a test platform for comparing segmentation approaches; and second, to generate segmented images from which ground truths can be created for future use by artificial intelligence tools. An experimental demonstration and a performance analysis discussion are presented.
29
Zeng W, Luo J, Cheng J, Lu Y. Efficient fetal ultrasound image segmentation for automatic head circumference measurement using a lightweight deep convolutional neural network. Med Phys 2022; 49:5081-5092. [PMID: 35536111 DOI: 10.1002/mp.15700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 03/20/2022] [Accepted: 04/24/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Fetal head circumference (HC) is an important biometric parameter used to assess fetal development in obstetric clinical practice. Most existing methods use a deep neural network to accomplish automatic fetal HC measurement from two-dimensional ultrasound images, and some achieve relatively high prediction accuracy. However, few of these methods focus on optimizing model efficiency. Our purpose is to develop a more efficient approach for this task, which could help doctors measure HC faster and be more suitable for deployment on devices with scarce computing resources. METHODS In this paper, we present a very lightweight deep convolutional neural network for automatic fetal head segmentation from ultrasound images. By using a sequential prediction network architecture, the proposed model performs much faster inference while maintaining high prediction accuracy. In addition, we used depthwise separable convolution to replace part of the standard convolutions in the network and shrank the input image to further improve model efficiency. After obtaining the fetal head segmentation, post-processing, including morphological processing and least-squares ellipse fitting, was applied to obtain the fetal HC. All experiments in this work were performed on a public dataset, HC18, with 999 fetal ultrasound images for training and 335 for testing. The dataset is publicly available on https://hc18.grand-challenge.org/ and the code for our method is publicly available on https://github.com/ApeMocker/CSM-for-fetal-HC-measurement. RESULTS Our model has only 0.13 million (M) parameters and achieves an inference speed of 28 ms per frame on a CPU and 0.194 ms per frame on a GPU, which, to the best of our knowledge, far exceeds all existing deep-learning-based models. Experimental results showed that the method achieved a mean absolute difference of 1.97 (±1.89) mm and a Dice similarity coefficient of 97.61 (±1.72)% on the HC18 test set, which are comparable to the state-of-the-art. CONCLUSION We presented a very lightweight deep-learning-based model to realize fast and accurate fetal head segmentation from two-dimensional ultrasound images, which is then used to calculate the fetal HC. The proposed method could help obstetricians measure the fetal head circumference more efficiently with high accuracy, and has the potential to be applied in situations where computing resources are relatively scarce.
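The post-processing described above, ellipse fitting followed by circumference computation, can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the ellipse is fitted from image moments rather than a least-squares contour fit (e.g. cv2.fitEllipse), and Ramanujan's approximation stands in for whatever perimeter formula the paper uses; the synthetic mask and pixel size are hypothetical.

```python
import math
import numpy as np

def fit_ellipse_from_mask(mask):
    """Moment-based ellipse fit to a filled binary head mask."""
    ys, xs = np.nonzero(mask)
    # For a filled ellipse, the covariance eigenvalues are (semi-axis)^2 / 4.
    evals = np.linalg.eigvalsh(np.cov(np.stack([xs, ys]).astype(float)))
    b, a = 2.0 * math.sqrt(evals[0]), 2.0 * math.sqrt(evals[1])
    return a, b  # semi-axes in pixels, a >= b

def ellipse_circumference_mm(a_px, b_px, px_size_mm):
    """Head circumference via Ramanujan's second perimeter approximation."""
    a, b = a_px * px_size_mm, b_px * px_size_mm
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

# Synthetic "head": a filled ellipse with semi-axes 60 x 40 px at 0.5 mm/px.
yy, xx = np.mgrid[:200, :200]
mask = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / 40.0) ** 2 <= 1.0
a, b = fit_ellipse_from_mask(mask)
hc = ellipse_circumference_mm(a, b, px_size_mm=0.5)  # near the analytic ~158.7 mm
```

In a real pipeline the mask would be the network's output after morphological cleaning, and the pixel size would come from the scan metadata.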
Affiliation(s)
- Wen Zeng
- School of Biomedical Engineering, Shenzhen campus of Sun Yat-sen University, Shenzhen, Guangdong, 518107, China
- Jie Luo
- School of Biomedical Engineering, Shenzhen campus of Sun Yat-sen University, Shenzhen, Guangdong, 518107, China; Key Laboratory of Sensing Technology and Biomedical Instrument of Guangdong Province, Guangdong Provincial Engineering and Technology Center of Advanced and Portable Medical Devices, Sun Yat-sen University, Guangzhou, China
- Jiaru Cheng
- School of Biomedical Engineering, Shenzhen campus of Sun Yat-sen University, Shenzhen, Guangdong, 518107, China
- Yiling Lu
- School of Biomedical Engineering, Shenzhen campus of Sun Yat-sen University, Shenzhen, Guangdong, 518107, China
30
Fetal ultrasound image segmentation using dilated multi-scale-LinkNet. Int J Health Sci (Qassim) 2022. [DOI: 10.53730/ijhs.v6ns1.6047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Ultrasound imaging is routinely conducted for prenatal care in many countries to determine the health of the fetus, the progress of the pregnancy, and the baby's due date. The intrinsic properties of fetal images at different stages of pregnancy make the automatic extraction of the fetal head from ultrasound image data difficult. The proposed work develops a deep learning model called Dilated Multi-scale-LinkNet for automatically segmenting fetal skulls from two-dimensional ultrasound image data. The network is modeled on LinkNet, since it offers better interpretability in biomedical applications. Convolutional layers with dilations are added following the encoders; dilated convolution is used to expand the receptive field without losing resolution. The model is trained and evaluated using the HC18 grand challenge dataset, which contains 2D ultrasound images at different pregnancy stages. Experiments performed on ultrasound images of women at different pregnancy stages reveal that we achieved a 94.82% Dice score, 1.9 mm ADF, 0.72 DF, and 2.02 HD when segmenting the fetal skull. Employing Dilated Multi-scale-LinkNet improves the accuracy, as well as all other evaluation parameters of the segmentation, compared with existing methods.
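A dilated convolution, as added after the encoders above, spaces kernel taps `dilation` pixels apart, enlarging the receptive field with no extra parameters. A minimal NumPy sketch (cross-correlation, as CNN frameworks implement it; the toy feature map is hypothetical):

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """'Valid' 2-D convolution (cross-correlation, as in CNN frameworks) whose
    kernel taps are spaced `dilation` pixels apart: the receptive field grows
    from k to (k - 1) * dilation + 1 per axis with no extra parameters."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1  # effective kernel extent
    eff_w = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * x[i * dilation:i * dilation + out.shape[0],
                                    j * dilation:j * dilation + out.shape[1]]
    return out

x = np.arange(49, dtype=float).reshape(7, 7)  # toy feature map
k = np.ones((3, 3)) / 9.0                     # mean filter
same = dilated_conv2d(x, k, dilation=1)       # 3x3 receptive field -> 5x5 output
wide = dilated_conv2d(x, k, dilation=2)       # 5x5 receptive field -> 3x3 output
```

The same 3x3 kernel thus covers a 5x5 neighborhood at dilation 2, which is why stacking dilated layers after the encoders widens context cheaply.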
31
Yang C, Liao S, Yang Z, Guo J, Zhang Z, Yang Y, Guo Y, Yin S, Liu C, Kang Y. RDHCformer: Fusing ResDCN and Transformers for Fetal Head Circumference Automatic Measurement in 2D Ultrasound Images. Front Med (Lausanne) 2022; 9:848904. [PMID: 35425784 PMCID: PMC9002127 DOI: 10.3389/fmed.2022.848904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 03/07/2022] [Indexed: 11/13/2022] Open
Abstract
Fetal head circumference (HC) is an important biological parameter for monitoring the healthy development of the fetus. Since HC measurements are affected by the skill and experience of the sonographer, a rapid, accurate, and automatic measurement of fetal HC in prenatal ultrasound is of great significance. We propose a new one-stage, anchor-free network for rotated-ellipse detection, which is also an end-to-end network for automatic fetal HC measurement that requires no post-processing. The network combines a simple transformer structure with a convolutional neural network (CNN) for a lightweight design; it makes full use of the transformer's powerful global feature extraction and the CNN's local feature extraction to capture continuous and complete skull edge information. The two complement each other to improve the detection precision of fetal HC without significantly increasing the amount of computation. To reduce the large variation of intersection over union (IoU) in rotated-ellipse detection caused by slight angle deviations, we use a soft stage-wise regression (SSR) strategy for angle regression and add a KLD term, which approximates an IoU loss, to the total loss function. The proposed method achieved good results on the HC18 dataset, demonstrating its effectiveness. This study is expected to help less experienced sonographers, support precision medicine, and relieve the worldwide shortage of sonographers for prenatal ultrasound.
Affiliation(s)
- Chaoran Yang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Medical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
- Shanshan Liao
- Department of Obstetrics, Shengjing Hospital of China Medical University, Shenyang, China
- Zeyu Yang
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Jiaqi Guo
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Zhichao Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yingjian Yang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Medical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
- Yingwei Guo
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Medical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
- Shaowei Yin
- Department of Obstetrics, Shengjing Hospital of China Medical University, Shenyang, China
- Caixia Liu
- Department of Obstetrics, Shengjing Hospital of China Medical University, Shenyang, China
- Yan Kang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Medical Device Innovation Center, Shenzhen Technology University, Shenzhen, China; Engineering Research Centre of Medical Imaging and Intelligent Analysis, Ministry of Education, Shenyang, China
32
An Image Processing Protocol to Extract Variables Predictive of Human Embryo Fitness for Assisted Reproduction. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12073531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Despite the use of new techniques in embryo selection and the availability of equipment such as EmbryoScope® and Geri®, which help in the evaluation of embryo quality, embryologists' classifications remain subjective and subject to inter- and intra-observer variability, compromising the successful implantation of the embryo. Nonetheless, with images acquired through the time-lapse system, it is possible to process them digitally, providing a better analysis of the embryo and enabling the automatic analysis of a large volume of information. An image processing protocol was developed using well-established techniques to segment blastocyst images and extract variables of interest. A total of 33 variables were automatically generated by digital image processing, each representing a different aspect of the embryo and describing a different characteristic of the blastocyst. These variables can be categorized into texture, gray-level average, gray-level standard deviation, modal value, relations, and light level. The automated and directed steps of the proposed processing protocol exclude spurious results, except when image quality (e.g., focus) prevents correct segmentation. The image processing protocol can segment human blastocyst images and automatically extract 33 variables that describe quantitative aspects of the blastocyst's regions, with potential utility in embryo selection for assisted reproductive technology (ART).
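The gray-level variable groups mentioned above (average, standard deviation, modal value) can be sketched as region statistics over a segmented blastocyst mask. The image, mask, and returned keys below are illustrative stand-ins, not the protocol's actual 33 variables:

```python
import numpy as np

def region_gray_variables(img, mask):
    """Gray-level descriptors of a segmented region: mean, standard deviation,
    and modal (most frequent) value, mirroring the variable groups above."""
    vals = img[mask > 0]
    hist = np.bincount(vals, minlength=256)  # 8-bit gray-level histogram
    return {"mean": float(vals.mean()),
            "std": float(vals.std()),
            "mode": int(hist.argmax())}

# Hypothetical 8-bit image: a brighter inner region inside a segmented area.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 100
img[3:5, 3:5] = 180
mask = np.zeros_like(img)
mask[2:6, 2:6] = 1
v = region_gray_variables(img, mask)  # mean 120, mode 100
```

Texture and relational variables would follow the same pattern: a per-region computation restricted to the segmented pixels.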
33
Arroyo J, Marini TJ, Saavedra AC, Toscano M, Baran TM, Drennan K, Dozier A, Zhao YT, Egoavil M, Tamayo L, Ramos B, Castaneda B. No sonographer, no radiologist: New system for automatic prenatal detection of fetal biometry, fetal presentation, and placental location. PLoS One 2022; 17:e0262107. [PMID: 35139093 PMCID: PMC8827457 DOI: 10.1371/journal.pone.0262107] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Accepted: 12/17/2021] [Indexed: 02/06/2023] Open
Abstract
Ultrasound imaging is a vital component of high-quality obstetric care. In rural and under-resourced communities, the scarcity of ultrasound imaging results in a considerable gap in the healthcare of pregnant mothers. To increase access to ultrasound in these communities, we developed a new automated diagnostic framework, operated without an experienced sonographer or interpreting provider, for assessment of fetal biometric measurements, fetal presentation, and placental position. This approach involves a standardized volume sweep imaging (VSI) protocol, based solely on external body landmarks, to obtain imaging without an experienced sonographer, and a deep learning algorithm (U-Net) for diagnostic assessment without a radiologist. Obstetric VSI ultrasound examinations were performed in Peru by an ultrasound operator with no previous ultrasound experience who underwent 8 hours of training on a standard protocol. The U-Net was trained to automatically segment the fetal head and placental location from the VSI ultrasound acquisitions to subsequently evaluate fetal biometry, fetal presentation, and placental position. In comparison to diagnostic interpretation of VSI acquisitions by a specialist, the U-Net model showed 100% agreement for fetal presentation (Cohen's κ = 1, p<0.0001) and 76.7% agreement for placental location (Cohen's κ = 0.59, p<0.0001). This corresponded to 100% sensitivity and specificity for fetal presentation and 87.5% sensitivity and 85.7% specificity for anterior placental location. The method also achieved a low relative error of 5.6% for biparietal diameter and 7.9% for head circumference. Biometry measurements corresponded to estimated gestational ages within 2 weeks of those assigned by standard-of-care examination with up to 89% accuracy. This system could be deployed in rural and underserved areas to provide vital information about a pregnancy without a trained sonographer or interpreting provider. The resulting increased access to ultrasound imaging and diagnosis could reduce disparities in healthcare delivery in under-resourced areas.
Affiliation(s)
- Junior Arroyo
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
- Thomas J. Marini
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Ana C. Saavedra
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
- Marika Toscano
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, New York, United States of America
- Timothy M. Baran
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Kathryn Drennan
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, New York, United States of America
- Ann Dozier
- Department of Public Health, University of Rochester Medical Center, Rochester, New York, United States of America
- Yu Tina Zhao
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Miguel Egoavil
- Research & Development, Medical Innovation & Technology, Lima, Perú
- Lorena Tamayo
- Research & Development, Medical Innovation & Technology, Lima, Perú
- Berta Ramos
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Benjamin Castaneda
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
34
Zhang J, Petitjean C, Ainouz S. Segmentation-Based vs. Regression-Based Biomarker Estimation: A Case Study of Fetus Head Circumference Assessment from Ultrasound Images. J Imaging 2022; 8:jimaging8020023. [PMID: 35200726 PMCID: PMC8877769 DOI: 10.3390/jimaging8020023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/07/2022] [Accepted: 01/19/2022] [Indexed: 11/16/2022] Open
Abstract
The fetal head circumference (HC) is a key biometric for monitoring fetal growth during pregnancy, estimated from ultrasound (US) images. The standard approach to automatically measuring the HC is to use a segmentation network to segment the skull, and then estimate the head contour length from the segmentation map via ellipse fitting, usually after post-processing. In this application, segmentation is just an intermediate step toward the estimation of a parameter of interest. Another possibility is to estimate the HC directly with a regression network. Even though such segmentation-free approaches have been boosted by deep learning, it is not yet clear how well direct approaches compare to segmentation approaches, which are expected to be more accurate. This observation motivates the present study, where we propose a fair, quantitative comparison of segmentation-based and segmentation-free (i.e., regression) approaches to estimate how far regression-based approaches stand from segmentation approaches. We experiment with various convolutional neural network (CNN) architectures and backbones for both segmentation and regression models and provide estimation results on the HC18 dataset, as well as an agreement analysis, to support our findings. We also investigate memory usage and computational efficiency to compare both types of approaches. The experimental results demonstrate that even though segmentation-based approaches deliver the most accurate results, regression CNN approaches are actually learning to find prominent features, leading to promising yet improvable HC estimation results.
|
35
|
Liu H, Liu J, Chen F, Shan C. Progressive Residual Learning with Memory Upgrade for Ultrasound Image Blind Super-resolution. IEEE J Biomed Health Inform 2022; 26:4390-4401. [PMID: 35041614 DOI: 10.1109/jbhi.2022.3142076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
For clinical medical diagnosis and treatment, image super-resolution (SR) technology can help improve ultrasonic imaging quality and thereby enhance the accuracy of disease diagnosis. However, due to differences in sensing devices or transmission media, the resolution degradation process of ultrasound imaging in real scenes is uncontrollable, especially when the blur kernel is unknown. This issue makes current end-to-end SR networks perform poorly when applied to ultrasonic images. Aiming to achieve effective SR in real ultrasound medical scenes, in this work we propose a blind deep SR method based on progressive residual learning and memory upgrade. Specifically, we estimate the accurate blur kernel from the spatial attention map block of the low-resolution (LR) ultrasound image through a multi-label classification network; we then construct three modules for ultrasound image blind SR: an up-sampling (US) module, a residual learning (RL) module, and a memory upgrading (MU) module. The US module is designed to upscale the input information, and the up-sampled residual result is used for SR reconstruction. The RL module is employed to approximate the original LR input and to continuously generate the updated residual and feed it to the next US module. The last MU module stores all progressively learned residuals, which offers increased interactions between the US and RL modules, augmenting detail recovery. Extensive experiments and evaluations on the benchmark CCA-US and US-CASE datasets demonstrate that the proposed approach achieves better performance than state-of-the-art methods.
|
36
|
Ashkani Chenarlogh V, Ghelich Oghli M, Shabanzadeh A, Sirjani N, Akhavan A, Shiri I, Arabi H, Sanei Taheri M, Tarzamni MK. Fast and Accurate U-Net Model for Fetal Ultrasound Image Segmentation. ULTRASONIC IMAGING 2022; 44:25-38. [PMID: 34986724 DOI: 10.1177/01617346211069882] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
U-Net based algorithms, due to their complex computations, have limitations when used in clinical devices. In this paper, we address this problem through a novel U-Net based architecture, called fast and accurate U-Net, for the medical image segmentation task. The proposed fast and accurate U-Net model contains four tuned 2D-convolutional, 2D-transposed convolutional, and batch normalization layers as its main layers. There are four blocks in the encoder-decoder path. The results of our proposed architecture were evaluated using a prepared dataset for head circumference and abdominal circumference segmentation tasks, and a public dataset (HC18 Grand Challenge dataset) for fetal head circumference measurement. The proposed fast network significantly improved the processing time in comparison with U-Net, dilated U-Net, R2U-Net, attention U-Net, and MFP U-Net; it took 0.47 seconds to segment a fetal abdominal image. In addition, on the prepared dataset, the proposed accurate model achieved Dice and Jaccard coefficients of 97.62% and 95.43% for fetal head segmentation and 95.07% and 91.99% for fetal abdominal segmentation. Moreover, we obtained Dice and Jaccard coefficients of 97.45% and 95.00% on the public HC18 Grand Challenge dataset. Based on these results, we conclude that a fine-tuned, simple, well-structured model used in clinical devices can outperform complex models.
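Dice and Jaccard coefficients such as those reported above compare a predicted binary mask with a ground-truth mask. A minimal NumPy sketch of how these overlap metrics are typically computed (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Compute (Dice, Jaccard) coefficients for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    if union == 0:               # both masks empty: define perfect agreement
        return 1.0, 1.0
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / union
    return float(dice), float(jaccard)
```

The two metrics are monotonically related (J = D / (2 - D)), which is why papers often report both from the same segmentation output.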
Affiliation(s)
| | - Mostafa Ghelich Oghli
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
- Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
| | - Ali Shabanzadeh
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
| | - Nasim Sirjani
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
| | - Ardavan Akhavan
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
| | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Morteza Sanei Taheri
- Department of Radiology, Shohada-e-Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Mohammad Kazem Tarzamni
- Department of Radiology, Imam Reza Hospital, Tabriz University of Medical Sciences, Tabriz, Iran
| |
|
37
|
A deep-learning framework for metacarpal-head cartilage-thickness estimation in ultrasound rheumatological images. Comput Biol Med 2021; 141:105117. [PMID: 34968861 DOI: 10.1016/j.compbiomed.2021.105117] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 11/30/2021] [Accepted: 12/02/2021] [Indexed: 12/18/2022]
Abstract
OBJECTIVE Rheumatoid arthritis (RA) is a chronic disease characterized by erosive symmetrical polyarthritis. Bone and cartilage are the main joint targets of this disease. Cartilage damage is one of the most relevant determinants of physical disability in RA patients. Cartilage damage is currently assessed by clinicians, who manually measure cartilage thickness in ultrasound (US) imaging. This poses issues of intra- and inter-observer variability. Relying on the acquisition of metacarpal-head US images from 38 subjects, this work addresses the problem of automatic cartilage-thickness measurement by designing a new deep-learning (DL) framework. METHODS The framework consists of a Convolutional Neural Network (CNN), responsible for regressing cartilage-interface distance fields, followed by a post-processing step to delineate the two cartilage interfaces from the predicted distance fields and compute the cartilage thickness. RESULTS Our framework achieved encouraging results, with a mean absolute difference (ADF) of 0.032 (±0.026) mm against manual thickness annotation by an expert clinician. When evaluating the intra-observer variability, we obtained an ADF of 0.036 (±0.028) mm. CONCLUSION The proposed framework achieved an ADF against manual annotation that was comparable to the intra-observer variability, proving the potential of DL in the field. SIGNIFICANCE This work is the first to address the problem of automatic cartilage-thickness estimation in US rheumatological images with DL, paving the way for future research in the field. From a clinical perspective, the proposed framework proved to be a valuable tool to support the clinical routine, increasing the reproducibility of cartilage thickness measurements.
|
38
|
Matthew J, Skelton E, Day TG, Zimmer VA, Gomez A, Wheeler G, Toussaint N, Liu T, Budd S, Lloyd K, Wright R, Deng S, Ghavami N, Sinclair M, Meng Q, Kainz B, Schnabel JA, Rueckert D, Razavi R, Simpson J, Hajnal J. Exploring a new paradigm for the fetal anomaly ultrasound scan: Artificial intelligence in real time. Prenat Diagn 2021; 42:49-59. [PMID: 34648206 DOI: 10.1002/pd.6059] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Revised: 09/20/2021] [Accepted: 10/07/2021] [Indexed: 12/12/2022]
Abstract
OBJECTIVE Advances in artificial intelligence (AI) have demonstrated potential to improve medical diagnosis. We piloted the end-to-end automation of the mid-trimester screening ultrasound scan using AI-enabled tools. METHODS A prospective method comparison study was conducted. Participants had both standard and AI-assisted US scans performed. The AI tools automated image acquisition, biometric measurement, and report production. A feedback survey captured the sonographers' perceptions of scanning. RESULTS Twenty-three subjects were studied. The average time saving per scan was 7.62 min (34.7%) with the AI-assisted method (p < 0.0001). There was no difference in reporting time. There were no clinically significant differences in biometric measurements between the two methods. The AI tools saved a satisfactory view in 93% of cases for the four core views only, and in 73% for the full 13 views, compared to 98% for both using the manual scan. Survey responses suggest that the AI tools helped sonographers to concentrate on image interpretation by removing disruptive tasks. CONCLUSION Separating freehand scanning from image capture and measurement resulted in a faster scan and an altered workflow. Removing repetitive tasks may allow more attention to be directed toward identifying fetal malformation. Further work is required to improve the image plane detection algorithm for use in real time.
Affiliation(s)
- Jacqueline Matthew
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.,Guy's and St Thomas' NHS Foundation Trust, London, UK
| | - Emily Skelton
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.,Guy's and St Thomas' NHS Foundation Trust, London, UK.,School of Health Sciences, City University of London, London, UK
| | - Thomas G Day
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.,Guy's and St Thomas' NHS Foundation Trust, London, UK
| | - Veronika A Zimmer
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Alberto Gomez
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Gavin Wheeler
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Nicolas Toussaint
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Tianrui Liu
- Department of Computing, Imperial College London, London, UK
| | - Samuel Budd
- Department of Computing, Imperial College London, London, UK
| | - Karen Lloyd
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Robert Wright
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Shujie Deng
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Nooshin Ghavami
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Matthew Sinclair
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Qingjie Meng
- Department of Computing, Imperial College London, London, UK
| | - Bernhard Kainz
- Department of Computing, Imperial College London, London, UK
| | - Julia A Schnabel
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Daniel Rueckert
- Department of Computing, Imperial College London, London, UK.,School of Informatics and Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
| | - Reza Razavi
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.,Guy's and St Thomas' NHS Foundation Trust, London, UK
| | - John Simpson
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.,Guy's and St Thomas' NHS Foundation Trust, London, UK
| | - Jo Hajnal
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| |
|
39
|
Moccia S, Fiorentino MC, Frontoni E. Mask-R²CNN: a distance-field regression version of Mask-RCNN for fetal-head delineation in ultrasound images. Int J Comput Assist Radiol Surg 2021; 16:1711-1718. [PMID: 34156608 PMCID: PMC8580944 DOI: 10.1007/s11548-021-02430-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 06/04/2021] [Indexed: 11/03/2022]
Abstract
BACKGROUND AND OBJECTIVES Fetal head-circumference (HC) measurement from ultrasound (US) images provides useful hints for assessing fetal growth. Such measurement is performed manually in actual clinical practice, posing issues of intra- and inter-clinician variability. This work presents a fully automatic, deep-learning-based approach to HC delineation, which we named Mask-R²CNN. It advances our previous work in the field and performs HC distance-field regression in an end-to-end fashion, without requiring a priori HC localization or any postprocessing for outlier removal. METHODS Mask-R²CNN follows the Mask-RCNN architecture, with a backbone inspired by feature-pyramid networks, a region-proposal network and the ROI align. The Mask-RCNN segmentation head is here modified to regress the HC distance field. RESULTS Mask-R²CNN was tested on the HC18 Challenge dataset, which consists of 999 training and 335 testing images. With a comprehensive ablation study, we showed that Mask-R²CNN achieved a mean absolute difference of 1.95 mm (standard deviation [Formula: see text] mm), outperforming other approaches in the literature. CONCLUSIONS With this work, we proposed an end-to-end model for HC distance-field regression. With our experimental results, we showed that Mask-R²CNN may be an effective support for clinicians for assessing fetal growth.
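Distance-field regression, as described above, replaces a binary segmentation target with a map of each pixel's distance to the head contour; the contour can then be recovered as the near-zero level set of the predicted field. A toy NumPy sketch of the target construction and the delineation step (illustrative only, not the network head described in the abstract):

```python
import numpy as np

def distance_field(shape: tuple, contour_pts: np.ndarray) -> np.ndarray:
    """Distance-field target: for each pixel of an image of the given
    (H, W) shape, the Euclidean distance to the nearest contour point.
    contour_pts has shape (N, 2) in (row, col) coordinates."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([ys, xs], axis=-1).astype(float)          # (H, W, 2)
    d = np.linalg.norm(grid[:, :, None, :] - contour_pts[None, None, :, :],
                       axis=-1)                               # (H, W, N)
    return d.min(axis=2)

def delineate(field: np.ndarray, tol: float = 0.5) -> np.ndarray:
    """Recover contour pixels as the near-zero level set of the field."""
    return np.argwhere(field < tol)
```

A regression network trained on such targets outputs a field directly, and the same thresholding recovers the delineation at inference time.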
Affiliation(s)
- Sara Moccia
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy.
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy.
| | | | - Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy
| |
|
40
|
Wang L, Guo D, Wang G, Zhang S. Annotation-Efficient Learning for Medical Image Segmentation Based on Noisy Pseudo Labels and Adversarial Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2795-2807. [PMID: 33370237 DOI: 10.1109/tmi.2020.3047807] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Although deep learning has achieved state-of-the-art performance for medical image segmentation, its success relies on a large set of manually annotated training images that are expensive to acquire. In this paper, we propose an annotation-efficient learning framework for segmentation tasks that avoids annotations of training images, in which we use an improved Cycle-Consistent Generative Adversarial Network (GAN) to learn from a set of unpaired medical images and auxiliary masks obtained either from a shape model or from public datasets. We first use the GAN to generate pseudo labels for our training images under the implicit high-level shape constraint represented by a Variational Auto-encoder (VAE)-based discriminator, with the help of the auxiliary masks, and build a Discriminator-guided Generator Channel Calibration (DGCC) module which employs the discriminator's feedback to calibrate the generator for better pseudo labels. To learn from the pseudo labels, which are noisy, we further introduce a noise-robust iterative learning method using a noise-weighted Dice loss. We validated our framework in two situations: objects with a simple shape model, like the optic disc in fundus images and the fetal head in ultrasound images, and complex structures, like the lung in X-ray images and the liver in CT images. Experimental results demonstrated that 1) our VAE-based discriminator and DGCC module help to obtain high-quality pseudo labels; 2) our proposed noise-robust learning method can effectively overcome the effect of noisy pseudo labels; and 3) the segmentation performance of our method without using annotations of training images is close or even comparable to that of learning from human annotations.
|
41
|
Chen Z, Liu Z, Du M, Wang Z. Artificial Intelligence in Obstetric Ultrasound: An Update and Future Applications. Front Med (Lausanne) 2021; 8:733468. [PMID: 34513890 PMCID: PMC8429607 DOI: 10.3389/fmed.2021.733468] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 08/04/2021] [Indexed: 01/04/2023] Open
Abstract
Artificial intelligence (AI) can support clinical decisions and provide quality assurance for images. Although ultrasonography is commonly used in obstetrics and gynecology, the use of AI in this field is still in its infancy. Nevertheless, in repetitive ultrasound examinations, such as those involving automatic positioning and identification of fetal structures, prediction of gestational age (GA), and real-time image quality assurance, AI has great potential. To realize its application, it is necessary to promote interdisciplinary communication between AI developers and sonographers. In this review, we outline the benefits of AI technology in obstetric ultrasound diagnosis in optimizing image acquisition, quantification, segmentation, and location identification, which can be helpful for obstetric ultrasound diagnosis in different periods of pregnancy.
Affiliation(s)
- Zhiyi Chen
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China.,Institute of Medical Imaging, University of South China, Hengyang, China
| | - Zhenyu Liu
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
| | - Meng Du
- Institute of Medical Imaging, University of South China, Hengyang, China
| | - Ziyao Wang
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
| |
|
42
|
Liu P, Zhao H, Li P. Automated Segmentation of Fetal Ultrasound Images Using Feature Attention Supervised Network. Ultrasound Q 2021; 37:278-286. [PMID: 34478428 DOI: 10.1097/ruq.0000000000000532] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Segmentation of anatomical structures from ultrasound images requires the expertise of an experienced clinician, and developing a machine-automated segmentation process is complicated by the existence of characteristic artifacts. In this article, we present a novel end-to-end network that enables automated measurements of the fetal head circumference (HC) and fetal abdomen circumference (AC) to be made from 2-dimensional (2D) ultrasound images during each pregnancy trimester. These measurements are necessary because the HC and AC are used to predict gestational age and to monitor fetal growth. Automated HC and AC assessments are valuable for providing independent and objective results and are particularly useful in developing countries, where trained sonographers are in short supply. We propose a scale attention expanding network that builds a feature pyramid inside the network; the intermediate result of each scale is then concatenated to the feature with a fusion scheme for the next layer. Furthermore, a scale attention module is proposed for selecting the most useful scale and for reducing scale noise. To optimize the network, a deep supervision method based on boundary attention is employed. Experimental results show that the scale attention expanding network obtained an absolute difference, Hausdorff distance, and Dice similarity coefficient of 1.81 ± 1.69%, 1.22 ± 0.77%, and 97.94%, respectively, which were top results on the HC18 dataset; the respective results on the abdomen set were 2.23 ± 2.38%, 0.42 ± 0.56%, and 98.04%. The experiments conducted demonstrate that our method provides superior performance to existing fetal ultrasound segmentation methods.
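The Hausdorff distance reported above quantifies the worst-case disagreement between a predicted contour and a reference contour. A compact NumPy sketch for two point sets (illustrative only, not the authors' evaluation code; for large contours a k-d tree would be preferable to this O(N·M) pairwise computation):

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets a (N, 2) and b (M, 2):
    the largest distance from any point in one set to its nearest
    neighbour in the other set."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return float(max(d.min(axis=1).max(),   # directed HD from a to b
                     d.min(axis=0).max()))  # directed HD from b to a
```

Because it takes a maximum over boundary points, a single stray contour pixel can dominate the metric, which is why it is usually reported alongside averaged metrics such as the Dice coefficient.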
|
43
|
Xu L, Gao S, Shi L, Wei B, Liu X, Zhang J, He Y. Exploiting Vector Attention and Context Prior for Ultrasound Image Segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
44
|
Delay U, Nawarathne T, Dissanayake S, Gunarathne S, Withanage T, Godaliyadda R, Rathnayake C, Ekanayake P, Wijayakulasooriya J. Novel non-invasive in-house fabricated wearable system with a hybrid algorithm for fetal movement recognition. PLoS One 2021; 16:e0254560. [PMID: 34255780 PMCID: PMC8277045 DOI: 10.1371/journal.pone.0254560] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2021] [Accepted: 06/29/2021] [Indexed: 11/18/2022] Open
Abstract
Fetal movement count monitoring is one of the most commonly used methods of assessing fetal well-being. Although a few methods are available to monitor fetal movements, they have several adverse qualities, such as unreliability and the inability to be used in a non-clinical setting. Therefore, this research was conducted to design a complete system that enables pregnant mothers to monitor fetal movement at home. This system consists of a non-invasive, non-transmitting sensor unit that can be fabricated at low cost. An accelerometer was utilized as the primary sensor, and a micro-controller based circuit was implemented. Clinical testing was conducted using this sensor unit in two phases: readings were taken from 120 mothers during the first phase and from 15 mothers during the second. Validation was done by conducting an abdominal ultrasound scan, which was used as the ground truth during the second phase of the clinical testing procedure. A clinical survey was also conducted in parallel with clinical testing in order to improve the sensor unit and the final system. Four different signal processing algorithms were implemented on the dataset and their performance was compared. Three of the four algorithms obtained a true positive rate of around 85%; however, the best algorithm was selected on the basis of minimizing the false positive rate. Consequently, the most feasible and best performing algorithm was determined and utilized in the final system. This algorithm has a true positive rate of 86% and a false positive rate of 7%. Furthermore, a mobile application was also developed to be used with the sensor unit by pregnant mothers. Finally, a complete end-to-end method to monitor fetal movement in a non-clinical setting was presented by the proposed system.
Affiliation(s)
- Upekha Delay
- Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka
| | - Thoshara Nawarathne
- Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka
| | - Sajan Dissanayake
- Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka
| | - Samitha Gunarathne
- Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka
| | - Thanushi Withanage
- Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka
| | - Roshan Godaliyadda
- Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka
| | - Chathura Rathnayake
- Department of Obstetrics and Gynacology, Faculty of Medicine, University of Peradeniya, Peradeniya, Sri Lanka
| | - Parakrama Ekanayake
- Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka
| | - Janaka Wijayakulasooriya
- Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Peradeniya, Sri Lanka
| |
|
45
|
Ahmedt-Aristizabal D, Armin MA, Denman S, Fookes C, Petersson L. Graph-Based Deep Learning for Medical Diagnosis and Analysis: Past, Present and Future. SENSORS (BASEL, SWITZERLAND) 2021; 21:4758. [PMID: 34300498 PMCID: PMC8309939 DOI: 10.3390/s21144758] [Citation(s) in RCA: 44] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 07/05/2021] [Accepted: 07/07/2021] [Indexed: 01/17/2023]
Abstract
With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning, and specifically deep learning, methods can be exploited to analyse healthcare data. A major limitation of existing methods has been the focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can be determined by either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application, including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
Affiliation(s)
- David Ahmedt-Aristizabal
- Imaging and Computer Vision Group, CSIRO Data61, Canberra 2601, Australia; (M.A.A.); (L.P.)
- Signal Processing, Artificial Intelligence and Vision Technologies (SAIVT) Research Program, Queensland University of Technology, Brisbane 4000, Australia; (S.D.); (C.F.)
| | - Mohammad Ali Armin
- Imaging and Computer Vision Group, CSIRO Data61, Canberra 2601, Australia; (M.A.A.); (L.P.)
| | - Simon Denman
- Signal Processing, Artificial Intelligence and Vision Technologies (SAIVT) Research Program, Queensland University of Technology, Brisbane 4000, Australia; (S.D.); (C.F.)
| | - Clinton Fookes
- Signal Processing, Artificial Intelligence and Vision Technologies (SAIVT) Research Program, Queensland University of Technology, Brisbane 4000, Australia; (S.D.); (C.F.)
| | - Lars Petersson
- Imaging and Computer Vision Group, CSIRO Data61, Canberra 2601, Australia; (M.A.A.); (L.P.)
| |
|
46
|
Ghelich Oghli M, Shabanzadeh A, Moradi S, Sirjani N, Gerami R, Ghaderi P, Sanei Taheri M, Shiri I, Arabi H, Zaidi H. Automatic fetal biometry prediction using a novel deep convolutional network architecture. Phys Med 2021; 88:127-137. [PMID: 34242884 DOI: 10.1016/j.ejmp.2021.06.020] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 06/23/2021] [Accepted: 06/27/2021] [Indexed: 12/18/2022] Open
Abstract
PURPOSE Fetal biometric measurements face a number of challenges, including the presence of speckle, limited soft-tissue contrast, and difficulties in the presence of low amniotic fluid. This work proposes a convolutional neural network for automatic segmentation and measurement of fetal biometric parameters, including biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), and femur length (FL), from ultrasound images; it relies on attention gates incorporated into the multi-feature pyramid Unet (MFP-Unet) network. METHODS The proposed approach, referred to as Attention MFP-Unet, learns via the attention gates to automatically extract/detect salient regions to be treated as the object of interest. After determining the type of anatomical structure in the image using a convolutional neural network, Niblack's thresholding technique was applied as a pre-processing algorithm for head and abdomen identification, whereas a novel algorithm was used for femur extraction. A publicly available dataset (HC18 Grand Challenge) and clinical data of 1334 subjects were utilized for training and evaluation of the Attention MFP-Unet algorithm. RESULTS The Dice similarity coefficient (DSC), Hausdorff distance (HD), percentage of good contours, conformity coefficient, and average perpendicular distance (APD) were employed for quantitative evaluation of fetal anatomy segmentation. In addition, correlation analysis, good contours, and conformity were employed to evaluate the accuracy of the biometry predictions. Attention MFP-Unet achieved 0.98, 1.14 mm, 100%, 0.95, and 0.2 mm for DSC, HD, good contours, conformity, and APD, respectively. CONCLUSIONS Quantitative evaluation demonstrated the superior performance of the Attention MFP-Unet compared to state-of-the-art approaches commonly employed for automatic measurement of fetal biometric parameters.
Affiliation(s)
- Mostafa Ghelich Oghli
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran; Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium.
| | - Ali Shabanzadeh
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran.
| | - Shakiba Moradi
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
| | - Nasim Sirjani
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
| | - Reza Gerami
- Radiation Sciences Research Center (RSRC), Aja University of Medical Sciences, Tehran, Iran
| | - Payam Ghaderi
- Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran
| | - Morteza Sanei Taheri
- Department of Radiology, Shohada-e-Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, CH-1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
| |
|
47
|
Yang X, Dou H, Huang R, Xue W, Huang Y, Qian J, Zhang Y, Luo H, Guo H, Wang T, Xiong Y, Ni D. Agent With Warm Start and Adaptive Dynamic Termination for Plane Localization in 3D Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1950-1961. [PMID: 33784618 DOI: 10.1109/tmi.2021.3069663] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Accurate standard plane (SP) localization is the fundamental step for prenatal ultrasound (US) diagnosis. Typically, dozens of US SPs are collected to determine the clinical diagnosis. 2D US requires a separate scan for each SP, which is time-consuming and operator-dependent, whereas 3D US captures multiple SPs in one shot and is inherently less user-dependent and more efficient. Automatically locating SPs in 3D US is very challenging due to the huge search space and large variations in fetal posture. Our previous study proposed a deep reinforcement learning (RL) framework with an alignment module and active termination to localize SPs in 3D US automatically. However, the termination of the agent's search in RL is important and affects practical deployment. In this study, we enhance our previous RL framework with a newly designed adaptive dynamic termination that enables an early stop of the agent's search, saving up to 67% of inference time and thus boosting both the accuracy and efficiency of the RL framework. Besides, we validate the effectiveness and generalizability of our algorithm extensively on our in-house multi-organ datasets containing 433 fetal brain volumes, 519 fetal abdomen volumes, and 683 uterus volumes. Our approach achieves localization errors of 2.52 mm/10.26°, 2.48 mm/10.39°, 2.02 mm/10.48°, 2.00 mm/14.57°, 2.61 mm/9.71°, 3.09 mm/9.58°, and 1.49 mm/7.54° for the transcerebellar, transventricular, and transthalamic planes in the fetal brain, the abdominal plane in the fetal abdomen, and the mid-sagittal, transverse, and coronal planes in the uterus, respectively. Experimental results show that our method is general and has the potential to improve the efficiency and standardization of US scanning.
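The adaptive dynamic termination the abstract describes can be sketched as a plateau test on the agent's recent estimates: stop the search as soon as the plane parameters stop changing instead of running a fixed step budget. The function below is a minimal illustrative sketch, not the authors' implementation; the window size, tolerance, and scalar plane parameterization are all assumptions.

```python
# Hedged sketch of adaptive dynamic termination for an iterative
# plane-search agent. `step_fn` stands in for one RL action that refines
# the current plane estimate (here a single scalar for simplicity).

def search_plane(step_fn, init_plane, max_steps=100, window=5, tol=1e-3):
    """Run step_fn repeatedly; stop early once the estimate has changed
    by less than `tol` over the last `window` steps."""
    plane = init_plane
    history = [plane]
    for t in range(1, max_steps + 1):
        plane = step_fn(plane)
        history.append(plane)
        if len(history) > window:
            recent = history[-window:]
            if max(recent) - min(recent) < tol:
                return plane, t  # early stop: estimate has stabilized
    return plane, max_steps
```

With a step function that converges geometrically, the loop terminates in a couple of dozen iterations rather than exhausting `max_steps`, which is the source of the inference-time savings the abstract reports.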
48
Automatic linear measurements of the fetal brain on MRI with deep neural networks. Int J Comput Assist Radiol Surg 2021; 16:1481-1492. [PMID: 34185253 DOI: 10.1007/s11548-021-02436-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 06/17/2021] [Indexed: 12/17/2022]
Abstract
PURPOSE Timely, accurate and reliable assessment of fetal brain development is essential to reduce short- and long-term risks to fetus and mother. Fetal MRI is increasingly used for fetal brain assessment. Three key biometric linear measurements important for fetal brain evaluation are the cerebral biparietal diameter (CBD), bone biparietal diameter (BBD), and trans-cerebellum diameter (TCD), obtained manually by expert radiologists on reference slices, which is time-consuming and prone to human error. The aim of this study was to develop a fully automatic method for computing the CBD, BBD and TCD measurements from fetal brain MRI. METHODS The input is a fetal brain MRI volume, which may include the fetal body and the mother's abdomen. The outputs are the measurement values and the reference slices on which the measurements were computed. The method, which follows the manual measurement principle, consists of five stages: (1) computation of a region of interest that includes the fetal brain with an anisotropic 3D U-Net classifier; (2) reference slice selection with a convolutional neural network; (3) slice-wise fetal brain structure segmentation with a multi-class U-Net classifier; (4) computation of the fetal brain midsagittal line and fetal brain orientation; and (5) computation of the measurements. RESULTS Experimental results on 214 volumes for CBD, BBD and TCD measurements yielded a mean [Formula: see text] difference of 1.55 mm, 1.45 mm and 1.23 mm, respectively, and a Bland-Altman 95% confidence interval ([Formula: see text]) of 3.92 mm, 3.98 mm and 2.25 mm, respectively. These results are similar to the manual inter-observer variability and are consistent across gestational ages and brain conditions. CONCLUSIONS The proposed automatic method for computing biometric linear measurements of the fetal brain from MR imaging achieves human-level performance. It has the potential to be a useful method for assessing fetal brain biometry in normal and pathological cases and for improving routine clinical practice.
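The agreement statistics this abstract reports (a mean difference and Bland-Altman 95% limits per measurement) follow a standard recipe: subtract the manual value from the automatic one per case, then take the mean and 1.96 standard deviations of those differences. A minimal sketch with illustrative data, not the paper's:

```python
# Bland-Altman agreement between automatic and manual measurements (mm).
import statistics

def bland_altman(auto_mm, manual_mm):
    """Return (mean difference, half-width of the 95% limits of
    agreement); the limits are mean_diff +/- the returned half-width."""
    diffs = [a - m for a, m in zip(auto_mm, manual_mm)]
    mean_diff = statistics.mean(diffs)
    loa = 1.96 * statistics.stdev(diffs)
    return mean_diff, loa
```

A method whose limits of agreement fall within the manual inter-observer spread, as reported here, is conventionally read as performing at human level.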
49
Ke R, Bugeau A, Papadakis N, Kirkland M, Schuetz P, Schonlieb CB. Multi-Task Deep Learning for Image Segmentation Using Recursive Approximation Tasks. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:3555-3567. [PMID: 33667164 DOI: 10.1109/tip.2021.3062726] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Fully supervised deep neural networks for segmentation usually require a massive amount of pixel-level labels, which are expensive to create manually. In this work, we develop a multi-task learning method to relax this constraint. We regard the segmentation problem as a sequence of recursively defined approximation subproblems of increasing accuracy. The subproblems are handled by a framework that consists of (1) a segmentation task that learns from pixel-level ground-truth segmentation masks of a small fraction of the images; (2) a recursive approximation task that conducts partial object-region learning and data-driven mask evolution starting from partial masks of each object instance; and (3) other problem-oriented auxiliary tasks that are trained with sparse annotations and promote the learning of dedicated features. Most training images are labeled only by (rough) partial masks, which do not contain exact object boundaries, rather than by their full segmentation masks. During the training phase, the approximation task learns the statistics of these partial masks, and the partial regions are recursively grown towards object boundaries, aided by the information learned from the segmentation task, in a fully data-driven fashion. The network is trained on an extremely small number of precisely segmented images and a large set of coarse labels, so annotations can be obtained cheaply. We demonstrate the efficiency of our approach in three applications with microscopy and ultrasound images.
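The core recursive idea, growing a rough partial mask outward toward the object boundary under the guidance of the network's own predictions, can be illustrated with a toy region-growing loop. This pure-Python sketch is an assumed simplification of the mechanism, not the paper's architecture: here a probability map stands in for the learned segmentation task.

```python
# Toy recursive mask growth: expand the partial mask one pixel ring per
# step, but only into pixels the (stand-in) network scores as foreground.

def grow_mask(partial, prob, thresh=0.5, steps=3):
    h, w = len(partial), len(partial[0])
    mask = [row[:] for row in partial]
    for _ in range(steps):
        nxt = [row[:] for row in mask]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    continue
                # adopt a pixel if a 4-neighbour is already in the mask
                # and the probability map agrees it is foreground
                nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                if any(0 <= a < h and 0 <= b < w and mask[a][b]
                       for a, b in nbrs) and prob[y][x] >= thresh:
                    nxt[y][x] = 1
        mask = nxt
    return mask
```

In the paper the growth is learned and data-driven rather than a fixed morphological rule, but the boundary-seeking recursion follows the same outward pattern.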
50
The Accuracy of Sonographic Fetal Head Circumference in Twin Pregnancies. JOURNAL OF OBSTETRICS AND GYNAECOLOGY CANADA 2021; 43:1159-1163. [PMID: 33621678 DOI: 10.1016/j.jogc.2021.02.114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Revised: 02/08/2021] [Accepted: 02/09/2021] [Indexed: 11/20/2022]
Abstract
OBJECTIVE To assess the accuracy of sonographic estimation of fetal head circumference in twin gestations. METHODS A retrospective analysis of sonographic evaluations of twin gestations >34 weeks, performed within 7 days of delivery, in a single university-affiliated medical centre. Sonographic head circumference was compared with neonatal head circumference. Measures of accuracy included systematic error, random error, proportion of estimates within 5% of neonatal head circumference, and reliability analysis. Accuracy of sonographic head circumference was compared between the first and second twin. RESULTS Overall, 103 twin gestations were evaluated at a median of 4 days before delivery. The majority of twins were dichorionic-diamniotic (83%). Median gestational age at delivery was 37 weeks, with a median birthweight of 2645 grams for the first twin and 2625 grams for the second twin. For all fetuses, median sonographic head circumference was lower than the neonatal head circumference (first twin: 317.5 vs. 330 mm; second twin: 318.4 vs. 330 mm, P > 0.05 for both). Measures of accuracy showed no significant difference between first and second twin. There was no difference in the number of sonographic head circumference evaluations that were within 5% of the neonatal head circumference between fetuses (64% for both twins). Cronbach α value was higher for the second twin (0.746 vs. 0.613), suggesting higher accuracy for head circumference measurement for the second twin. CONCLUSION In our cohort, sonographic head circumference underestimated postnatal head circumference. Accuracy measurements were not significantly different between the first and second twin.
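The accuracy measures this abstract names (systematic error, random error, and the proportion of estimates within 5% of the neonatal value) are commonly defined as the mean and standard deviation of the percentage error, plus a within-tolerance fraction. The definitions below are those common conventions, assumed rather than quoted from the paper, with illustrative data:

```python
# Accuracy of sonographic vs. neonatal head-circumference measurements.
import statistics

def accuracy_measures(sono_mm, neonatal_mm):
    """Return (systematic error %, random error %, fraction within 5%)."""
    pct_err = [100.0 * (s - n) / n for s, n in zip(sono_mm, neonatal_mm)]
    systematic = statistics.mean(pct_err)          # bias
    random_err = statistics.stdev(pct_err)         # spread
    within_5pct = sum(abs(e) <= 5.0 for e in pct_err) / len(pct_err)
    return systematic, random_err, within_5pct
```

A negative systematic error corresponds to the underestimation the authors report, since sonographic values fell below the postnatal measurements.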