1
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023;12:6833. PMID: 37959298; PMCID: PMC10649694; DOI: 10.3390/jcm12216833.
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is considered cost-effective and easily accessible, but it is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened according to the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full-text copies were assigned to the OB/GYN subspecialties and their respective research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost, Philipp Kosian, Jorge Jimenez Cruz, Ulrich Gembruch, Brigitte Strizek, Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
2
Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023;43:1176-1219. PMID: 37503802; DOI: 10.1002/pd.6411.
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. The PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the keywords "neural networks" OR "artificial intelligence" OR "machine learning" OR "deep learning", AND "obstetrics" OR "obstetrical" OR "fetus" OR "foetus" OR "fetal" OR "foetal" OR "pregnancy" OR "pregnant", AND "ultrasound", from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound, defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. The search identified a total of 127 papers that fulfilled the inclusion criteria. Current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses such as assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan, Lea Nehme, Alfred Abuhamad: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
3
Sun H, Jiao J, Ren Y, Guo Y, Wang Y. Multimodal fusion model for classifying placenta ultrasound imaging in pregnancies with hypertension disorders. Pregnancy Hypertens 2023;31:46-53. PMID: 36577178; DOI: 10.1016/j.preghy.2022.12.003.
Abstract
BACKGROUND A multimodal fusion model was proposed to assist traditional visual diagnosis in evaluating the placental features of hypertensive disorders of pregnancy (HDP). OBJECTIVE The aim of this study was to analyse and compare placental features between normal and HDP pregnancies and to propose a multimodal fusion deep learning model for differentiating and characterizing placental features in HDP versus normal pregnancy. METHODS This prospective observational study included 654 pregnant women, 75 of whom had HDP. Grayscale ultrasound images (GSIs) and microflow images (MFIs) of the placentas were collected from all patients during routine obstetric examinations. Based on automated feature extraction and feature fusion, and after extensive training and optimization, a classification model named GMNet (an intelligent network based on GSIs and MFIs) was introduced for differentiating the placental features of normal and HDP pregnancies. The distributions of the placental features extracted by the deep convolutional neural networks (DCNNs) were visualized by Uniform Manifold Approximation and Projection for Dimension Reduction (UMAP). Metrics including sensitivity, specificity, accuracy, and the area under the curve (AUC) were used to score the model. Finally, placental tissue samples were randomly selected for microscopic analysis to demonstrate the interpretability and effectiveness of the GMNet model. RESULTS Compared with the normal group, ultrasound images in the HDP group showed coarser echogenic spots and more regions with focal cystic or hypoechoic lesions. The overall diagnostic performance of the GMNet model on the region of interest (ROI) was excellent (AUC: 97%), with a sensitivity of 90.0%, a specificity of 93.5%, and an accuracy of 93.1%. The fused GSI and MFI features of the placenta showed higher discriminative power than single-mode features (fusion vs GSI vs MFI features: 97.0% vs 91.2% vs 94.8%). Furthermore, microscopic analysis showed that unevenly distributed villi, increased syncytial nodules, and aggregated intervillous fibrin deposition were particularly frequent in the HDP cases. CONCLUSIONS The GMNet model could sensitively identify abnormal changes in the placental microstructure in pregnancies with HDP.
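As a hedged illustration of the feature-level fusion this abstract describes, the following NumPy sketch concatenates two modality embeddings and feeds them to a single classifier head. All dimensions, weights, and the logistic head are hypothetical stand-ins for GMNet's actual CNN backbones and classifier, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-image feature vectors from the two imaging modes.
# In GMNet these would come from deep CNN backbones; sizes are illustrative.
gsi_features = rng.standard_normal(128)   # grayscale ultrasound (GSI) branch
mfi_features = rng.standard_normal(128)   # microflow imaging (MFI) branch

# Feature-level fusion: concatenate the two modality embeddings...
fused = np.concatenate([gsi_features, mfi_features])   # shape (256,)

# ...then pass the fused vector to a shared classifier head. A single
# logistic unit stands in here for the network's final layers.
w = rng.standard_normal(fused.shape[0])
b = 0.0
p_hdp = 1.0 / (1.0 + np.exp(-(fused @ w + b)))  # pseudo-probability of HDP
print(fused.shape, 0.0 <= p_hdp <= 1.0)
```

The design point is that fusion happens in feature space (after each modality's encoder) rather than by averaging two separate per-modality predictions, which is what lets the classifier exploit cross-modality interactions.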
Affiliation(s)
- Hongshuang Sun, Yunyun Ren: Obstetrics and Gynecology Hospital of Fudan University, No. 128, Shenyang Road, Shanghai 200090, China
- Jing Jiao, Yi Guo, Yuanyuan Wang: Department of Electronic Engineering, Fudan University, No. 220, Handan Road, Yangpu District, Shanghai 200433, China; Key Laboratory of Medical Imaging, Computing and Computer-Assisted Intervention, Shanghai, China
4
He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021;8:729978. PMID: 34977053; PMCID: PMC8716504; DOI: 10.3389/fmed.2021.729978.
Abstract
The application of artificial intelligence (AI) technology to medical imaging has led to great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, research on AI in prenatal US has practical significance: applied to prenatal US diagnosis, it can improve work efficiency, provide quantitative assessments, standardize measurements, improve diagnostic accuracy, and automate image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and discusses the challenges encountered in these applications.
Affiliation(s)
- Lizhu Chen: Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
5
Davidson L, Boland MR. Towards deep phenotyping pregnancy: a systematic review on artificial intelligence and machine learning methods to improve pregnancy outcomes. Brief Bioinform 2021;22:6065792. PMID: 33406530; PMCID: PMC8424395; DOI: 10.1093/bib/bbaa369.
Abstract
Objective Development of novel informatics methods focused on improving pregnancy outcomes remains an active area of research. The purpose of this study is to systematically review the ways that artificial intelligence (AI) and machine learning (ML), including deep learning (DL), methodologies can inform patient care during pregnancy and improve outcomes. Materials and methods We searched English articles on EMBASE, PubMed and SCOPUS. Search terms included ML, AI, pregnancy and informatics. We included research articles and book chapters, excluding conference papers, editorials and notes. Results We identified 127 distinct studies from our queries that were relevant to our topic and included in the review. We found that supervised learning methods were more popular (n = 69) than unsupervised methods (n = 9). Popular methods included support vector machines (n = 30), artificial neural networks (n = 22), regression analysis (n = 17) and random forests (n = 16). Methods such as DL are beginning to gain traction (n = 13). Common areas within the pregnancy domain where AI and ML methods were used the most include prenatal care (e.g. fetal anomalies, placental functioning) (n = 73); perinatal care, birth and delivery (n = 20); and preterm birth (n = 13). Efforts to translate AI into clinical care include clinical decision support systems (n = 24) and mobile health applications (n = 9). Conclusions Overall, we found that ML and AI methods are being employed to optimize pregnancy outcomes, including modern DL methods (n = 13). Future research should focus on less-studied pregnancy domain areas, including postnatal and postpartum care (n = 2). Also, more work on clinical adoption of AI methods and the ethical implications of such adoption is needed.
Affiliation(s)
- Lena Davidson: College of St. Scholastica, Duluth, MN, USA
- Mary Regina Boland: Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania
6
Yu Z, Jiang X, Zhou F, Qin J, Ni D, Chen S, Lei B, Wang T. Melanoma Recognition in Dermoscopy Images via Aggregated Deep Convolutional Features. IEEE Trans Biomed Eng 2019;66:1006-1016. DOI: 10.1109/tbme.2018.2866166.
7
Kaur P, Singh G, Kaur P. An intelligent validation system for diagnostic and prognosis of ultrasound fetal growth analysis using Neuro-Fuzzy based on genetic algorithm. Egypt Inform J 2019. DOI: 10.1016/j.eij.2018.10.002.
8
Fetal facial standard plane recognition via very deep convolutional networks. Annu Int Conf IEEE Eng Med Biol Soc 2016:627-630. PMID: 28268406; DOI: 10.1109/embc.2016.7590780.
Abstract
Accurate recognition of the fetal facial standard plane (FFSP) (i.e., the axial, coronal, and sagittal planes) from ultrasound (US) images is essential for routine US examination. Because manual measurement is labor-intensive, subjective, time-consuming, and unreliable, an automatic FFSP recognition method is highly desirable. Unlike previous methods, we leverage a general framework to recognize the FFSP from US images automatically. Specifically, instead of the hand-crafted visual features used previously, we adopt a recently developed deep learning approach with a very deep convolutional network (DCNN) architecture to represent the fine-grained details of US images. Very small (3×3) convolution filters are also adopted to improve performance. Evaluation on our FFSP dataset shows the superiority of our method over previous studies, achieving state-of-the-art FFSP recognition results.
9
Lei B, Liu Y, Dong C, Chen X, Zhang X, Diao X, Yang G, Liu J, Yao S, Li H, Yuan J, Li S, Le X, Lin Y, Zeng W. Assessment of liver fibrosis in chronic hepatitis B via multimodal data. Neurocomputing 2017. DOI: 10.1016/j.neucom.2016.09.128.
10
Yu Z, Tan EL, Ni D, Qin J, Chen S, Li S, Lei B, Wang T. A Deep Convolutional Neural Network-Based Framework for Automatic Fetal Facial Standard Plane Recognition. IEEE J Biomed Health Inform 2017;22:874-885. PMID: 28534800; DOI: 10.1109/jbhi.2017.2705031.
Abstract
Ultrasound imaging has become a prevalent examination method in prenatal diagnosis. Accurate acquisition of the fetal facial standard plane (FFSP) is the most important precondition for subsequent diagnosis and measurement. In the past few years, considerable effort has been devoted to FFSP recognition using various hand-crafted features, but recognition performance remains unsatisfactory due to the high intraclass variation of FFSPs and the high visual similarity between FFSPs and non-FFSPs. To improve recognition performance, we propose a method to automatically recognize the FFSP via a deep convolutional neural network (DCNN) architecture. The proposed DCNN consists of 16 convolutional layers with small 3 × 3 kernels and three fully connected layers. Global average pooling is adopted in the last pooling layer to significantly reduce network parameters, which alleviates overfitting and improves performance under limited training data. Both a transfer learning strategy and a data augmentation technique tailored for FFSP are implemented to further boost recognition performance. Extensive experiments demonstrate the advantage of the proposed method over traditional approaches and the effectiveness of the DCNN in recognizing FFSP for clinical diagnosis.
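The parameter saving from global average pooling described in this abstract can be sketched numerically. A minimal NumPy illustration follows; the channel count, spatial size, and number of output classes are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Output of a final convolutional stage: 512 feature maps of size 7x7
# (shapes are illustrative, not taken from the paper).
feature_maps = rng.standard_normal((512, 7, 7))

# Global average pooling: collapse each feature map to its spatial mean,
# yielding one scalar per channel. Compared with flattening (512*7*7 =
# 25088 inputs to the next dense layer), the pooled vector has only 512
# entries, so the following layer needs ~49x fewer weights -- which is
# why GAP helps against overfitting on limited training data.
gap = feature_maps.mean(axis=(1, 2))      # shape (512,)

# A small classifier head on the pooled vector; 4 outputs stand in for
# e.g. three FFSP classes plus a non-FFSP class (a hypothetical setup).
w = rng.standard_normal((512, 4))
logits = gap @ w                          # shape (4,)
print(gap.shape, logits.shape)
```

The pooled vector's first entry is just the mean of the first feature map, so the operation is parameter-free; all learned capacity sits in the small head that follows.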
11
Lei B, Yang P, Wang T, Chen S, Ni D. Relational-Regularized Discriminative Sparse Learning for Alzheimer's Disease Diagnosis. IEEE Trans Cybern 2017;47:1102-1113. PMID: 28092591; DOI: 10.1109/tcyb.2016.2644718.
Abstract
Accurate identification and understanding of informative features is important for early Alzheimer's disease (AD) prognosis and diagnosis. In this paper, we propose a novel discriminative sparse learning method with relational regularization to jointly predict clinical scores and classify AD stages using multimodal features. Specifically, we apply a discriminative learning technique to expand the class-specific difference and include geometric information for effective feature selection. In addition, two kinds of relational information are incorporated to explore the intrinsic relationships among features and training subjects in terms of similarity learning. We map the original features into the target space to identify the informative and predictive features via a sparse learning technique. A unique loss function is designed to combine the discriminative learning and relational regularization methods. Experimental results based on a total of 805 subjects [226 AD patients, 393 mild cognitive impairment (MCI) subjects, and 186 normal controls (NCs)] from the AD Neuroimaging Initiative database show that the proposed method achieves a classification accuracy of 94.68% for AD versus NC, 80.32% for MCI versus NC, and 74.58% for progressive MCI versus stable MCI. In addition, we achieve remarkable performance in clinical score prediction and classification label identification, demonstrating efficacy for AD diagnosis and prognosis. Algorithm comparisons demonstrate the effectiveness of the introduced learning techniques and their superiority over state-of-the-art methods.
12
Lei B, Jiang F, Chen S, Ni D, Wang T. Longitudinal Analysis for Disease Progression via Simultaneous Multi-Relational Temporal-Fused Learning. Front Aging Neurosci 2017;9:6. PMID: 28316569; PMCID: PMC5335657; DOI: 10.3389/fnagi.2017.00006.
Abstract
It is highly desirable to predict the progression of Alzheimer's disease (AD) in patients [e.g., to predict conversion of mild cognitive impairment (MCI) to AD]; in particular, longitudinal prediction of AD is important for its early diagnosis. Currently, most existing methods predict different clinical scores using different models, or separately predict multiple scores at different future time points. Such approaches prevent the coordinated learning of multiple predictions that could jointly estimate clinical scores at multiple future time points. In this paper, we propose a joint learning method for predicting patients' clinical scores using multiple longitudinal prediction models for various future time points. Three important relationships among training samples, features, and clinical scores are explored. The relationship among the different longitudinal prediction models is captured through a common feature set shared by the prediction models at different time points. Experimental results based on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database show that our method achieves considerable improvement over competing methods in predicting multiple clinical scores.
Affiliation(s)
- Baiying Lei, Feng Jiang, Siping Chen, Dong Ni, Tianfu Wang: School of Biomedical Engineering, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
- Baiying Lei (additionally): Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou, China
14
Lei B, Chen S, Ni D, Wang T. Discriminative Learning for Alzheimer's Disease Diagnosis via Canonical Correlation Analysis and Multimodal Fusion. Front Aging Neurosci 2016;8:77. PMID: 27242506; PMCID: PMC4868852; DOI: 10.3389/fnagi.2016.00077.
Abstract
To address the challenging task of diagnosing neurodegenerative brain disease, such as Alzheimer's disease (AD) and mild cognitive impairment (MCI), we propose a novel method using discriminative feature learning and canonical correlation analysis (CCA) in this paper. Specifically, multimodal features and their CCA projections are concatenated together to represent each subject, and hence both individual and shared information of AD disease are captured. A discriminative learning with multilayer feature hierarchy is designed to further improve performance. Also, hybrid representation is proposed to maximally explore data from multiple modalities. A novel normalization method is devised to tackle the intra- and inter-subject variations from the multimodal data. Based on our extensive experiments, our method achieves an accuracy of 96.93% [AD vs. normal control (NC)], 86.57% (MCI vs. NC), and 82.75% [MCI converter (MCI-C) vs. MCI non-converter (MCI-NC)], respectively, which outperforms the state-of-the-art methods in the literature.
Affiliation(s)
- Baiying Lei, Siping Chen, Dong Ni, Tianfu Wang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University, Shenzhen, China