1. Chen H, Kumaralingam L, Zhang S, Song S, Zhang F, Zhang H, Pham TT, Punithakumar K, Lou EHM, Zhang Y, Le LH, Zheng R. Neural implicit surface reconstruction of freehand 3D ultrasound volume with geometric constraints. Med Image Anal 2024; 98:103305. [PMID: 39168075] [DOI: 10.1016/j.media.2024.103305]
Abstract
Three-dimensional (3D) freehand ultrasound (US) is a widely used imaging modality that allows non-invasive imaging of medical anatomy without radiation exposure. Surface reconstruction of US volumes is vital for acquiring the accurate anatomical structures needed for modeling, registration, and visualization. However, traditional methods cannot produce a high-quality surface due to image noise. Despite improvements in smoothness, continuity, and resolution from deep learning approaches, research on surface reconstruction in freehand 3D US is still limited. This study introduces FUNSR, a self-supervised neural implicit surface reconstruction method that learns signed distance functions (SDFs) from US volumes. In particular, FUNSR iteratively learns the SDFs by moving 3D queries sampled around volumetric point clouds to approximate the surface, guided by two novel geometric constraints: a sign consistency constraint and an on-surface constraint with adversarial learning. Our approach has been thoroughly evaluated across four datasets to demonstrate its adaptability to various anatomical structures, including a hip phantom dataset, two vascular datasets, and one publicly available prostate dataset. We also show that smooth and continuous representations greatly enhance the visual appearance of US data. Furthermore, we highlight the potential of our method to improve segmentation performance, as well as its robustness to noise distribution and motion perturbation.
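To make the pull-based learning idea concrete, here is a minimal, self-contained sketch of learning an SDF by pulling sampled 3D queries onto a point cloud, in the spirit of this abstract. It is not the authors' FUNSR code: the network, the toy spherical cloud, and the training loop are illustrative assumptions, and FUNSR's sign consistency and adversarial on-surface constraints are omitted.

```python
# A minimal sketch of pull-based SDF learning (an assumption-laden stand-in,
# not the FUNSR implementation). `SDFNet` and the toy point cloud are ours.
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """MLP mapping a 3D query point to a signed distance value."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, x):
        return self.net(x)

def pull_loss(model, queries, cloud):
    """Move each query along the SDF gradient by its predicted distance and
    require it to land on the nearest point of the volumetric point cloud."""
    queries = queries.requires_grad_(True)
    sdf = model(queries)
    grad = torch.autograd.grad(sdf.sum(), queries, create_graph=True)[0]
    grad = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    pulled = queries - sdf * grad                    # projected onto surface
    nearest = cloud[torch.cdist(queries, cloud).argmin(dim=1)]
    return ((pulled - nearest) ** 2).sum(dim=-1).mean()

# Toy usage: a synthetic spherical point cloud standing in for US data.
cloud = torch.randn(2048, 3)
cloud = cloud / cloud.norm(dim=-1, keepdim=True)     # unit-sphere surface
model = SDFNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(200):
    q = cloud[torch.randint(0, 2048, (512,))] + 0.05 * torch.randn(512, 3)
    loss = pull_loss(model, q, cloud)
    opt.zero_grad(); loss.backward(); opt.step()
```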
Affiliation(s)
- Hongbo Chen: School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China; Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, 200050, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Logiraj Kumaralingam: Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, Alberta, T6G 2R7, Canada
- Shuhang Zhang: School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Sheng Song: School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Fayi Zhang: School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Haibin Zhang: School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Thanh-Tu Pham: Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, Alberta, T6G 2R7, Canada; Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, T6G 2V2, Canada
- Kumaradevan Punithakumar: Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, Alberta, T6G 2R7, Canada
- Edmond H M Lou: Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, T6G 2V2, Canada; Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, T6G 1H9, Canada
- Yuyao Zhang: School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Lawrence H Le: Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, Alberta, T6G 2R7, Canada; Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, T6G 2V2, Canada
- Rui Zheng: School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China; Shanghai Engineering Research Center of Intelligent Vision and Imaging, ShanghaiTech University, Shanghai, 201210, China
2. Freitas J, Gomes-Fonseca J, Tonelli AC, Correia-Pinto J, Fonseca JC, Queirós S. Automatic multi-view pose estimation in focused cardiac ultrasound. Med Image Anal 2024; 94:103146. [PMID: 38537416] [DOI: 10.1016/j.media.2024.103146]
Abstract
Focused cardiac ultrasound (FoCUS) is a valuable point-of-care method for evaluating cardiovascular structures and function, but its scope is limited by the equipment and the operator's experience, resulting in primarily qualitative 2D exams. This study presents a novel framework to automatically estimate the 3D spatial relationship between standard FoCUS views. The proposed framework uses a multi-view U-Net-like fully convolutional neural network to regress line-based heatmaps representing the most likely areas of intersection between input images. The lines that best fit the regressed heatmaps are then extracted, and a system of nonlinear equations based on the intersections between view triplets is created and solved to determine the relative 3D pose of all input images. The feasibility and accuracy of the proposed pipeline were validated using a novel realistic in silico FoCUS dataset, demonstrating promising results. Interestingly, as shown in preliminary experiments, estimating the 2D images' relative poses enables the application of 3D image analysis methods and paves the way for 3D quantitative assessments in FoCUS examinations.
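One stage of the pipeline described here, recovering the line that best fits a regressed heatmap, can be sketched with intensity-weighted PCA. The snippet below is an illustrative stand-in under our own assumptions, not the authors' implementation, and the synthetic diagonal heatmap is a toy input.

```python
# A minimal sketch (not the authors' code) of fitting a line to a regressed
# heatmap: intensity-weighted PCA yields a point on the line and a direction.
import numpy as np

def fit_line_to_heatmap(heatmap):
    """Return (centroid, direction) of the intensity-weighted best-fit line."""
    ys, xs = np.nonzero(heatmap > 0)
    w = heatmap[ys, xs]
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = (w[:, None] * pts).sum(0) / w.sum()
    centered = (pts - centroid) * np.sqrt(w)[:, None]
    # Principal right-singular vector of the weighted, centred points gives
    # the direction of maximum variance, i.e. the line direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centroid, vt[0]

# Toy usage: a synthetic diagonal heatmap.
h = np.zeros((64, 64))
for i in range(64):
    h[i, i] = 1.0
c, d = fit_line_to_heatmap(h)
print(c, d)   # centroid near (31.5, 31.5), direction close to (0.707, 0.707)
```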
Affiliation(s)
- João Freitas: Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- João Gomes-Fonseca: Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Jorge Correia-Pinto: Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Department of Pediatric Surgery, Hospital de Braga, Braga, Portugal
- Jaime C Fonseca: Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Sandro Queirós: Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
3. Yeung PH, Hesse LS, Aliasi M, Haak MC, Xie W, Namburete AIL. Sensorless volumetric reconstruction of fetal brain freehand ultrasound scans with deep implicit representation. Med Image Anal 2024; 94:103147. [PMID: 38547665] [DOI: 10.1016/j.media.2024.103147]
Abstract
Three-dimensional (3D) ultrasound imaging has contributed to our understanding of fetal developmental processes by providing rich contextual information about inherently 3D anatomies. However, its use is limited in clinical settings, due to high purchasing costs and limited diagnostic practicality. Freehand 2D ultrasound imaging, in contrast, is routinely used in standard obstetric exams, but inherently lacks a 3D representation of the anatomies, which limits its potential for more advanced assessment. Such full representations are challenging to recover even with external tracking devices, due to internal fetal movement that is independent of the operator-led trajectory of the probe. Capitalizing on the flexibility offered by freehand 2D ultrasound acquisition, we propose ImplicitVol to reconstruct 3D volumes from non-sensor-tracked 2D ultrasound sweeps. Conventionally, reconstructions are performed on a discrete voxel grid. We, however, employ a deep neural network to represent, for the first time, the reconstructed volume as an implicit function. Specifically, ImplicitVol takes a set of 2D images as input, predicts their locations in 3D space, jointly refines the inferred locations, and learns a full volumetric reconstruction. When tested on natively acquired and volume-sampled 2D ultrasound video sequences collected from different manufacturers, the 3D volumes reconstructed by ImplicitVol show significantly better visual and semantic quality than existing interpolation-based reconstruction approaches. The inherent continuity of the implicit representation also enables ImplicitVol to reconstruct the volume at arbitrarily high resolution. As formulated, ImplicitVol has the potential to integrate seamlessly into the clinical workflow, while providing richer information for diagnosis and evaluation of the developing brain.
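The core implicit-representation idea, a coordinate network mapping (x, y, z) to intensity and trained on pixels of located 2D slices, can be sketched as below. This is a simplified stand-in for ImplicitVol under our own assumptions: the slice location is taken as known here, whereas ImplicitVol also predicts and jointly refines it.

```python
# A minimal sketch of an implicit volume I = f(x, y, z) realised by an MLP,
# fitted to one synthetic slice with a known pose (not the ImplicitVol code).
import torch
import torch.nn as nn

class ImplicitVolume(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, xyz):                  # (N, 3) coordinates in [-1, 1]^3
        return self.net(xyz).squeeze(-1)     # predicted intensity per point

# Toy training data: one axial slice at z = 0 with known pixel coordinates.
n = 64
u, v = torch.meshgrid(torch.linspace(-1, 1, n), torch.linspace(-1, 1, n),
                      indexing="ij")
coords = torch.stack([u, v, torch.zeros_like(u)], dim=-1).reshape(-1, 3)
intens = (coords[:, :2].norm(dim=1) < 0.5).float()    # synthetic disc image

model = ImplicitVolume()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    loss = ((model(coords) - intens) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Continuity of the implicit function lets us query arbitrary 3D locations,
# which is what permits reconstruction at arbitrary resolution.
print(model(torch.rand(10, 3) * 2 - 1))
```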
Affiliation(s)
- Pak-Hei Yeung: Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom; Oxford Machine Learning in NeuroImaging Lab, Department of Computer Science, University of Oxford, OX1 3QD, United Kingdom
- Linde S Hesse: Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom; Oxford Machine Learning in NeuroImaging Lab, Department of Computer Science, University of Oxford, OX1 3QD, United Kingdom
- Moska Aliasi: Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Monique C Haak: Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Weidi Xie: Shanghai Jiao Tong University, Shanghai, 200240, China; Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Ana I L Namburete: Oxford Machine Learning in NeuroImaging Lab, Department of Computer Science, University of Oxford, OX1 3QD, United Kingdom
4. Di Vece C, Le Lous M, Dromey B, Vasconcelos F, David AL, Peebles D, Stoyanov D. Ultrasound Plane Pose Regression: Assessing Generalized Pose Coordinates in the Fetal Brain. IEEE Trans Med Robot Bionics 2024; 6:41-52. [PMID: 38881728] [PMCID: PMC7616102] [DOI: 10.1109/tmrb.2023.3328638]
Abstract
In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from two-dimensional (2D) US images represents a significant challenge in skill acquisition. We aim to build a US plane localization system for 3D visualization, training, and guidance without integrating additional sensors. This work builds on our previous work, which predicts the six-dimensional (6D) pose of arbitrarily oriented US planes slicing the fetal brain, with respect to a normalized reference frame, using a convolutional neural network (CNN) regression model. Here, we analyze in detail the assumptions behind the normalized fetal brain reference frame and quantify its accuracy with respect to the acquisition of the transventricular (TV) standard plane (SP) for fetal biometry. We investigate the impact of registration quality in the training and testing data and its subsequent effect on trained models. Finally, we introduce data augmentations and larger training sets that improve on the results of our previous work, achieving median errors of 2.97 mm and 6.63° for translation and rotation, respectively.
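A hedged sketch of what such a 6D plane pose regressor might look like follows; the backbone, the continuous 6D rotation parameterisation, and all sizes are our own assumptions, not the paper's architecture.

```python
# A minimal sketch (an assumption, not the authors' network) of a CNN head
# regressing a 6D plane pose: 3 translation values plus a continuous 6D
# rotation representation orthonormalised into a rotation matrix.
import torch
import torch.nn as nn

class PlanePoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(           # stand-in feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 9)             # 3 translation + 6 rotation

    def forward(self, img):
        t_and_r = self.head(self.backbone(img))
        t, r6 = t_and_r[:, :3], t_and_r[:, 3:]
        # Gram-Schmidt: two predicted 3-vectors -> orthonormal rotation matrix.
        a, b = r6[:, :3], r6[:, 3:]
        x = nn.functional.normalize(a, dim=1)
        y = nn.functional.normalize(b - (x * b).sum(1, keepdim=True) * x, dim=1)
        z = torch.cross(x, y, dim=1)
        R = torch.stack([x, y, z], dim=-1)       # (B, 3, 3)
        return t, R

model = PlanePoseRegressor()
t, R = model(torch.randn(4, 1, 128, 128))        # a batch of 4 US planes
print(t.shape, R.shape)                          # (4, 3) and (4, 3, 3)
```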
Affiliation(s)
- Chiara Di Vece: EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, London WC1E 6DB, UK
- Maela Le Lous: WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, London WC1E 6DB, UK
- Brian Dromey: WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, London WC1E 6DB, UK
- Francisco Vasconcelos: EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, London WC1E 6DB, UK
- Anna L David: WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, London WC1E 6DB, UK
- Donald Peebles: WEISS, Elizabeth Garrett Anderson Institute for Women's Health, and the NIHR University College London Hospitals Biomedical Research Center, University College London, London WC1E 6DB, UK
- Danail Stoyanov: EPSRC Center for Interventional and Surgical Sciences and the Department of Computer Science, University College London, London WC1E 6DB, UK
5. Chen Z, Zhuo W, Wang T, Cheng J, Xue W, Ni D. Semi-Supervised Representation Learning for Segmentation on Medical Volumes and Sequences. IEEE Trans Med Imaging 2023; 42:3972-3986. [PMID: 37756175] [DOI: 10.1109/tmi.2023.3319973]
Abstract
Benefiting from massive labeled samples, deep learning-based segmentation methods have achieved great success on two-dimensional natural images. However, segmenting high-dimensional medical volumes and sequences remains challenging, due to the considerable clinical expertise required to produce large-scale annotations. Self- and semi-supervised learning methods have been shown to improve performance by exploiting unlabeled data, but they still fall short in mining local semantic discrimination and in exploiting volume/sequence structures. In this work, we propose a semi-supervised representation learning method with two novel modules that enhance the features in the encoder and decoder, respectively. For the encoder, based on the continuity between slices/frames and the common spatial layout of organs across subjects, we propose an asymmetric network with an attention-guided predictor that enables prediction between feature maps of different slices of unlabeled data. For the decoder, based on the semantic consistency between labeled and unlabeled data, we introduce a novel semantic contrastive learning scheme to regularize the feature maps in the decoder. The two parts are trained jointly on both labeled and unlabeled volumes/sequences in a semi-supervised manner. When evaluated on three benchmark datasets of medical volumes and sequences, our model outperforms existing methods by a large margin of 7.3% DSC on ACDC, 6.5% on Prostate, and 3.2% on CAMUS when only a few labeled samples are available. Further, results on the M&M dataset show that the proposed method yields improvements without any domain adaptation techniques for data from unknown domains. Intensive evaluations reveal the effectiveness of the representation mining and the superior performance of our method. The code is available at https://github.com/CcchenzJ/BootstrapRepresentation.
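As a rough illustration of a semantic contrastive regulariser of the kind this abstract mentions, the sketch below pulls unlabeled decoder features toward the class prototype they most resemble; the prototype tensors, pseudo-labeling rule, and loss form are our assumptions, not the authors' implementation.

```python
# A minimal sketch of a prototype-based semantic contrastive loss (an
# illustrative stand-in, not the released code). Prototypes would normally
# be class-mean features computed from labeled data; fixed tensors here.
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(feats, prototypes, temperature=0.1):
    """feats: (N, D) unlabeled features; prototypes: (C, D) class means."""
    feats = F.normalize(feats, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = feats @ prototypes.t() / temperature    # (N, C) similarities
    pseudo = logits.argmax(dim=1)                    # most similar class
    # Sharpen similarity to the chosen prototype, suppress the others.
    return F.cross_entropy(logits, pseudo)

feats = torch.randn(1024, 64)      # decoder features of unlabeled slices
protos = torch.randn(4, 64)        # 4 semantic classes from labeled data
print(semantic_contrastive_loss(feats, protos))
```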
6. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred imaging method. US is considered cost effective and easily accessible, but it is time consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full-text copies were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and the assessment of the endometrium and pelvic floor. In conclusion, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
7. Li Q, Shen Z, Li Q, Barratt DC, Dowrick T, Clarkson MJ, Vercauteren T, Hu Y. Long-term Dependency for 3D Reconstruction of Freehand Ultrasound Without External Tracker. IEEE Trans Biomed Eng 2023; PP:1033-1042. [PMID: 37856260] [DOI: 10.1109/tbme.2023.3325551]
Abstract
OBJECTIVE Reconstructing freehand ultrasound in 3D without any external tracker has been a long-standing challenge in ultrasound-assisted procedures. We aim to define new ways of parameterising long-term dependency and to evaluate the resulting performance. METHODS First, long-term dependency is encoded by transformation positions within a frame sequence. This is achieved by combining a sequence model with a multi-transformation prediction. Second, two dependency factors are proposed, anatomical image content and scanning protocol, as contributors to accurate reconstruction. Each factor is quantified experimentally by reducing the respective training variance. RESULTS 1) Adding long-term dependency up to 400 frames at 20 frames per second (fps) indeed improved reconstruction, with an up to 82.4% lower accumulated error compared with the baseline performance. The improvement was found to depend on sequence length, transformation interval and scanning protocol and, unexpectedly, not on the use of recurrent networks with long short-term memory modules; 2) Decreasing either anatomical or protocol variance in training led to poorer reconstruction accuracy. Interestingly, greater performance gains came from representative protocol patterns than from representative anatomical features. CONCLUSION The proposed algorithm uses hyperparameter tuning to effectively utilise long-term dependency. The proposed dependency factors are of practical significance in collecting diverse training data, regulating scanning protocols and developing efficient networks. SIGNIFICANCE The proposed methodology, with publicly available volunteer data and code, parameterises long-term dependency as an experimentally validated source of performance improvement, which could lead to better model development and practical optimisation of the reconstruction application.
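The trajectory accumulation that makes sequence length and transformation interval matter can be sketched as follows; `predict_transform` is a hypothetical stub standing in for the learned sequence model, not the authors' released code.

```python
# A minimal sketch of composing predicted frame-to-frame rigid transforms
# into frame-to-first poses, where accumulated drift arises. The stub below
# is an assumption; a real model would regress transforms from the images.
import numpy as np

def predict_transform(frame_a, frame_b):
    """Stand-in for the learned model: returns a 4x4 rigid transform."""
    rng = np.random.default_rng(0)
    T = np.eye(4)
    T[:3, 3] = rng.normal(scale=0.1, size=3)   # small translation only
    return T

def accumulate(frames, interval=1):
    """Compose transforms at the chosen interval into frame-to-first poses."""
    poses = [np.eye(4)]
    for i in range(interval, len(frames), interval):
        T_rel = predict_transform(frames[i - interval], frames[i])
        poses.append(poses[-1] @ T_rel)        # error accumulates here
    return poses

frames = [np.zeros((128, 128)) for _ in range(400)]   # 20 s at 20 fps
poses = accumulate(frames, interval=1)
print(len(poses), poses[-1][:3, 3])
```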
8. Guo H, Xu X, Song X, Xu S, Chao H, Myers J, Turkbey B, Pinto PA, Wood BJ, Yan P. Ultrasound Frame-to-Volume Registration via Deep Learning for Interventional Guidance. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:1016-1025. [PMID: 37015418] [PMCID: PMC10502768] [DOI: 10.1109/tuffc.2022.3229903]
Abstract
Fusing intraoperative 2-D ultrasound (US) frames with preoperative 3-D magnetic resonance (MR) images for guiding interventions has become the clinical gold standard in image-guided prostate cancer biopsy. However, developing an automatic image registration system for this application is challenging because of the modality gap between US/MR and the dimensionality gap between 2-D/3-D data. To overcome these challenges, we propose a novel US frame-to-volume registration (FVReg) pipeline to bridge the dimensionality gap between 2-D US frames and 3-D US volumes. The pipeline is implemented using deep neural networks, which are fully automatic without requiring external tracking devices. The framework consists of three major components: 1) a frame-to-frame registration network (Frame2Frame) that estimates the current frame's 3-D spatial position based on previous video context, 2) a frame-to-slice correction network (Frame2Slice) that adjusts the estimated frame position using the 3-D US volumetric information, and 3) a similarity filtering (SF) mechanism that selects the frame with the highest image similarity to the query frame. We validated our method on a clinical dataset with 618 subjects and tested its potential on real-time 2-D-US to 3-D-MR fusion navigation tasks. The proposed FVReg achieved an average target navigation error of 1.93 mm at 5-14 fps. Our source code is publicly available at https://github.com/DIAL-RPI/Frame-to-Volume-Registration.
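The similarity filtering (SF) stage can be illustrated with a simple sketch in which normalised cross-correlation stands in for the learned image similarity; the candidate slices and scoring rule are our assumptions, not the released implementation.

```python
# A minimal sketch of similarity filtering: among candidate slices resampled
# from the 3D US volume near an initial pose estimate, keep the one most
# similar to the query 2D frame. NCC is an illustrative similarity measure.
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def similarity_filter(query, candidate_slices):
    """Return the index and score of the candidate closest to the query."""
    scores = [ncc(query, s) for s in candidate_slices]
    best = int(np.argmax(scores))
    return best, scores[best]

query = np.random.rand(128, 128)
candidates = [np.random.rand(128, 128) for _ in range(8)]
candidates[3] = query + 0.01 * np.random.rand(128, 128)  # near-duplicate
print(similarity_filter(query, candidates))              # picks index 3
```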
9. Lawley A, Hampson R, Worrall K, Dobie G. Prescriptive Method for Optimizing Cost of Data Collection and Annotation in Machine Learning of Clinical Ultrasound. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082737] [DOI: 10.1109/embc40787.2023.10340858]
Abstract
Machine learning in medical ultrasound faces a major challenge: the prohibitive cost of producing and annotating clinical data. Optimizing data collection and annotation improves model training efficiency, reducing project cost and time. This paper prescribes a two-phase method for cost optimization based on iterative accuracy/sample-size prediction and on active learning for annotation optimization. METHODS Using public breast, fetal, and lung ultrasound datasets, we optimize data collection by statistically predicting accuracy for a desired dataset size, and we optimize labeling efficiency using active learning, in which the predictions with the lowest certainty are labelled manually. A practical case study on the BUSI dataset demonstrates the prescribed method. RESULTS With small data subsets (~10%), the relation between dataset size and final accuracy can be predicted, with diminishing returns beyond 50% usage. Manual annotation was reduced by ~10% by using active learning to focus the annotation effort. CONCLUSION This led to cost reductions of 50%-66% on the BUSI dataset, depending on requirements and the initial cost model, with a negligible accuracy drop of 3.75% from the theoretical maximum. CLINICAL RELEVANCE This work provides a methodology to optimize dataset size and manual data labelling, allowing the generation of cost-effective datasets. This is of particular interest to financially limited trials and feasibility studies, and it reduces the time burden on annotating clinicians.
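Phase 1, predicting final accuracy from small subsets, amounts to fitting a saturating learning curve and extrapolating it; a minimal sketch with synthetic numbers follows (the power-law form and the sample values are our assumptions, not the paper's data).

```python
# A minimal sketch of learning-curve extrapolation: fit a saturating power
# law to accuracies measured on small subsets, then predict accuracy at
# larger dataset sizes. All numbers below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a, b, c):
    """Saturating power law: accuracy approaches `a` as n grows."""
    return a - b * n ** (-c)

# Accuracies measured on subsets of increasing size (synthetic values).
n_obs = np.array([50, 100, 200, 400, 800])
acc_obs = np.array([0.71, 0.78, 0.83, 0.86, 0.88])

params, _ = curve_fit(learning_curve, n_obs, acc_obs,
                      p0=[0.95, 1.0, 0.5], maxfev=10000)
for n in (1600, 3200, 6400):
    print(n, round(float(learning_curve(n, *params)), 3))
# Diminishing returns: each doubling of n buys less additional accuracy.
```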
10. Bastiaansen WAP, Klein S, Koning AHJ, Niessen WJ, Steegers-Theunissen RPM, Rousian M. Computational methods for the analysis of early-pregnancy brain ultrasonography: a systematic review. EBioMedicine 2023; 89:104466. [PMID: 36796233] [PMCID: PMC9958260] [DOI: 10.1016/j.ebiom.2023.104466]
Abstract
BACKGROUND Early screening of the fetal brain is becoming routine clinical practice. Currently, this screening is performed by manual measurements and visual analysis, which is time-consuming and prone to errors. Computational methods may support this screening. Hence, the aim of this systematic review is to gain insight into the future research directions needed to bring automated early-pregnancy ultrasound analysis of the human brain to clinical practice. METHODS We searched PubMed (Medline ALL Ovid), EMBASE, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, and Google Scholar, from inception until June 2022. This study is registered in PROSPERO as CRD42020189888. Studies about computational methods for the analysis of human brain ultrasonography acquired before the 20th week of pregnancy were included. The key reported attributes were: level of automation; learning-based or not; use of clinical routine data depicting normal and abnormal brain development; public sharing of program source code and data; and analysis of confounding factors. FINDINGS Our search identified 2575 studies, of which 55 were included. 76% used an automatic method, 62% a learning-based method, and 45% used clinical routine data, of which 13% additionally depicted abnormal development. None of the studies shared their program source code publicly, and only two studies shared their data. Finally, 35% did not analyse the influence of confounding factors. INTERPRETATION Our review showed an interest in automatic, learning-based methods. To bring these methods to clinical practice, we recommend that studies use routine clinical data depicting both normal and abnormal development, make their dataset and program source code publicly available, and be attentive to the influence of confounding factors. The introduction of automated computational methods for early-pregnancy brain ultrasonography will save valuable time during screening, and ultimately lead to better detection, treatment and prevention of neurodevelopmental disorders. FUNDING The Erasmus MC Medical Research Advisor Committee (grant number: FB 379283).
Affiliation(s)
- Wietske A P Bastiaansen: Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands; Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Stefan Klein: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Anton H J Koning: Department of Pathology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Wiro J Niessen: Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Melek Rousian: Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
11. Caspi Y, de Zwarte SMC, Iemenschot IJ, Lumbreras R, de Heus R, Bekker MN, Hulshoff Pol H. Automatic measurements of fetal intracranial volume from 3D ultrasound scans. Front Neuroimaging 2022; 1:996702. [PMID: 37555155] [PMCID: PMC10406279] [DOI: 10.3389/fnimg.2022.996702]
Abstract
Three-dimensional fetal ultrasound is commonly used to study the volumetric development of brain structures. To date, only a limited number of automatic procedures for delineating the intracranial volume exist; hence, intracranial volume measurements from three-dimensional ultrasound images are predominantly performed manually. Here, we present and validate an automated tool to extract the intracranial volume from three-dimensional fetal ultrasound scans. The procedure is based on the registration of a brain model to a subject brain. The intracranial volume of the subject is measured by applying the inverse of the final transformation to an intracranial mask of the brain model. The automatic measurements showed a high correlation with manual delineations of the same subjects at two gestational ages, namely around 20 and 30 weeks (linear fitting R² = 0.88 at 20 weeks and 0.77 at 30 weeks; intraclass correlation coefficients 0.94 and 0.84, respectively). Overall, the automatic intracranial volumes were larger than the manually delineated ones (84 ± 16 vs. 76 ± 15 cm³, and 274 ± 35 vs. 237 ± 28 cm³), probably due to differences in cerebellum delineation. Notably, the automated measurements reproduced both the non-linear pattern of fetal brain growth and the increased inter-subject variability for older fetuses. By contrast, there was some disagreement between the manual and automatic delineations concerning the size of sexual dimorphism differences. The method presented here provides a relatively efficient way to automatically delineate volumes of fetal brain structures such as the intracranial volume. It can be used as a research tool to investigate these structures in large cohorts, which will ultimately aid in understanding fetal structural human brain development.
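A hedged SimpleITK sketch of this registration-then-mask-transfer scheme is given below; the file names, the affine-only registration, and the metric settings are simplifying assumptions, not the authors' pipeline.

```python
# A minimal sketch (our own simplification, not the paper's code) of the
# measurement scheme described above: register an atlas brain to the
# subject, carry the atlas intracranial mask into subject space, and count
# mask voxels times voxel volume. File names are placeholders.
import SimpleITK as sitk

subject = sitk.ReadImage("subject_us.nii.gz", sitk.sitkFloat32)
atlas = sitk.ReadImage("atlas_brain.nii.gz", sitk.sitkFloat32)
atlas_mask = sitk.ReadImage("atlas_icv_mask.nii.gz", sitk.sitkUInt8)

# Registration of the atlas to the subject (simplified to a single affine).
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    subject, atlas, sitk.AffineTransform(3)))
reg.SetInterpolator(sitk.sitkLinear)
tx = reg.Execute(subject, atlas)

# Resample the atlas mask onto the subject grid; in SimpleITK's convention
# this transform maps subject (fixed) points into atlas (moving) space,
# which plays the role of the inverse mapping the abstract mentions.
mask_in_subject = sitk.Resample(atlas_mask, subject, tx,
                                sitk.sitkNearestNeighbor, 0)

# Intracranial volume = voxel count x voxel volume (mm^3 converted to cm^3).
stats = sitk.StatisticsImageFilter()
stats.Execute(mask_in_subject)
voxel_mm3 = 1.0
for s in subject.GetSpacing():
    voxel_mm3 *= s
print(f"ICV: {stats.GetSum() * voxel_mm3 / 1000.0:.1f} cm^3")
```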
Affiliation(s)
- Yaron Caspi: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Sonja M. C. de Zwarte: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Iris J. Iemenschot: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Raquel Lumbreras: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Roel de Heus: Department of Obstetrics and Gynaecology, St. Antonius Hospital, Utrecht, Netherlands; Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Mireille N. Bekker: Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Hilleke Hulshoff Pol: Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands; Department of Psychology, Utrecht University, Utrecht, Netherlands
12. Xu J, Moyer D, Grant PE, Golland P, Iglesias JE, Adalsteinsson E. SVoRT: Iterative Transformer for Slice-to-Volume Registration in Fetal Brain MRI. Med Image Comput Comput Assist Interv 2022; 13436:3-13. [PMID: 37103480] [PMCID: PMC10129054] [DOI: 10.1007/978-3-031-16446-0_1]
Abstract
Volumetric reconstruction of fetal brains from multiple stacks of MR slices, acquired in the presence of almost unpredictable and often severe subject motion, is a challenging task that is highly sensitive to the initialization of slice-to-volume transformations. We propose a novel slice-to-volume registration method using Transformers trained on synthetically transformed data, which model multiple stacks of MR slices as a sequence. With the attention mechanism, our model automatically detects the relevance between slices and predicts the transformation of one slice using information from other slices. We also estimate the underlying 3D volume to assist slice-to-volume registration and update the volume and transformations alternately to improve accuracy. Results on synthetic data show that our method achieves lower registration error and better reconstruction quality compared with existing state-of-the-art methods. Experiments with real-world MRI data are also performed to demonstrate the ability of the proposed model to improve the quality of 3D reconstruction under severe fetal motion.
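The architectural gist, attention across a sequence of slices with one pose regressed per slice, can be sketched as follows; the slice encoder, dimensions, and 6-parameter pose output are illustrative assumptions rather than the SVoRT implementation, and the iterative volume update is omitted.

```python
# A minimal sketch (not the SVoRT code) of a Transformer encoder that
# attends across a stack of MR slices and regresses one rigid transform per
# slice, here as 3 rotation + 3 translation parameters.
import torch
import torch.nn as nn

class SliceToVolumeTransformer(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Sequential(       # stand-in per-slice encoder
            nn.Flatten(start_dim=2), nn.Linear(64 * 64, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.to_pose = nn.Linear(d_model, 6)    # 3 rot + 3 trans per slice

    def forward(self, slices):                  # (B, S, 64, 64)
        tokens = self.embed(slices)             # (B, S, d_model)
        ctx = self.encoder(tokens)              # attention across slices
        return self.to_pose(ctx)                # (B, S, 6)

model = SliceToVolumeTransformer()
poses = model(torch.randn(2, 24, 64, 64))       # 2 stacks of 24 slices
print(poses.shape)                              # torch.Size([2, 24, 6])
```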
Affiliation(s)
- Junshen Xu: Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Daniel Moyer: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- P Ellen Grant: Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Polina Golland: Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Juan Eugenio Iglesias: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA; Harvard Medical School, Boston, MA, USA; Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA
- Elfar Adalsteinsson: Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA; Institute for Medical Engineering and Science, MIT, Cambridge, MA, USA
13. Yu Y, Chen Z, Zhuang Y, Yi H, Han L, Chen K, Lin J. A guiding approach of ultrasound scan for accurately obtaining standard diagnostic planes of fetal brain malformation. J Xray Sci Technol 2022; 30:1243-1260. [PMID: 36155489] [DOI: 10.3233/xst-221278]
Abstract
BACKGROUND Standard planes (SPs) are crucial for the diagnosis of fetal brain malformation. However, acquiring the SPs accurately is very time-consuming and requires extensive experience, due to the large differences in fetal posture and the complexity of the SP definitions. OBJECTIVE This study aims to present a guiding approach that assists the sonographer in obtaining the SPs more accurately and more quickly. METHODS First, the sonographer uses the 3D probe to scan the fetal head and obtain 3D volume data; an affine transformation then calibrates the 3D volume data to the standard body position and establishes the corresponding 3D head model in real time. When the sonographer uses the 2D probe to scan a plane, the position of the current plane is clearly shown in the 3D head model by our RLNet (regression location network), which guides the sonographer to obtain the three SPs more accurately. Once the three SPs are located, the sagittal and coronal planes are generated automatically from their spatial relationship with the three SPs. RESULTS Experimental results on 3200 2D US images show that RLNet achieves an average angle error of 3.91 ± 2.86° for the transthalamic plane, a clear improvement over previously published results. The automatically generated coronal and sagittal SPs conform to the diagnostic criteria and the diagnostic requirements of fetal brain malformation screening. CONCLUSIONS A deep learning-based guided scanning method for ultrasonic brain malformation screening is proposed for the first time, and it has pragmatic value for future clinical application.
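The final step, resampling the calibrated 3D head volume along a located plane to generate a 2D view, can be sketched as below; the plane parameterisation, the helper name, and the random volume are our own stand-ins, not the paper's code.

```python
# A minimal sketch of extracting a 2D plane from a 3D volume given the
# plane's pose (centre point plus two in-plane directions). Illustrative
# only; the paper's RLNet would supply the pose from a live 2D scan.
import numpy as np
from scipy.ndimage import map_coordinates

def extract_plane(volume, center, u_dir, v_dir, size=64, spacing=1.0):
    """Sample `volume` on a size x size grid spanned by u_dir/v_dir."""
    u_dir = np.asarray(u_dir, float); u_dir /= np.linalg.norm(u_dir)
    v_dir = np.asarray(v_dir, float); v_dir /= np.linalg.norm(v_dir)
    r = (np.arange(size) - size / 2) * spacing
    uu, vv = np.meshgrid(r, r, indexing="ij")
    pts = (np.asarray(center, float)[:, None, None]
           + u_dir[:, None, None] * uu + v_dir[:, None, None] * vv)
    # Trilinear interpolation of the volume at the plane's 3D coordinates.
    return map_coordinates(volume, pts, order=1, mode="nearest")

vol = np.random.rand(96, 96, 96)                 # stand-in 3D head volume
mid_sagittal = extract_plane(vol, center=(48, 48, 48),
                             u_dir=(0, 1, 0), v_dir=(0, 0, 1))
print(mid_sagittal.shape)                        # (64, 64)
```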
Affiliation(s)
- Yalan Yu: College of Biomedical Engineering, Sichuan University, Chengdu, China
- Zhong Chen: Department of US, General Hospital of Western Theater Command, Chengdu, China
- Yan Zhuang: College of Biomedical Engineering, Sichuan University, Chengdu, China
- Heng Yi: Department of US, General Hospital of Western Theater Command, Chengdu, China
- Lin Han: College of Biomedical Engineering, Sichuan University, Chengdu, China; Haihong Intellimage Medical Technology (Tianjin) Co., Ltd, Tianjin, China
- Ke Chen: College of Biomedical Engineering, Sichuan University, Chengdu, China
- Jiangli Lin: College of Biomedical Engineering, Sichuan University, Chengdu, China