1.
Kasera B, Shinar S, Edke P, Pruthi V, Goldenberg A, Erdman L, Van Mieghem T. Deep-learning computer vision can identify increased nuchal translucency in the first trimester of pregnancy. Prenat Diagn 2024; 44:535-543. [PMID: 38558081] [DOI: 10.1002/pd.6559]
Abstract
OBJECTIVE: Many fetal anomalies can already be diagnosed by ultrasound in the first trimester of pregnancy. Unfortunately, in clinical practice, detection rates for anomalies in early pregnancy remain low. Our aim was to use an automated image segmentation algorithm to detect one of the most common fetal anomalies: a thickened nuchal translucency (NT), which is a marker for genetic and structural anomalies.
METHODS: Standardized mid-sagittal ultrasound images of the fetal head and chest were collected for 560 fetuses between 11 weeks and 13 weeks 6 days of gestation, 88 (15.7%) of whom had an NT thicker than 3.5 mm. Image quality was graded as high or low by two fetal medicine experts. Images were divided into a training set (n = 451, 55 thick NT) and a test set (n = 109, 33 thick NT). We then trained a U-Net convolutional neural network to segment the fetus and the NT region and computed the NT:fetus ratio of these regions. The ability of this ratio to separate thick (anomalous) NT regions from healthy, typical NT regions was first evaluated on ground-truth segmentations to validate the metric, and then on predicted segmentations to validate our algorithm, both using the area under the receiver operating characteristic curve (AUROC).
RESULTS: The ground-truth NT:fetus ratio detected thick NTs with 0.97 AUROC in both the training and test sets. The fetus and NT regions were detected with a Dice score of 0.94 in the test set. The NT:fetus ratio based on model segmentation detected thick NTs with an AUROC of 0.96 relative to clinician labels. At 91% specificity, 94% of thick NT cases were detected (sensitivity) in the test set. The detection rate was significantly higher (p = 0.003) in high- versus low-quality images (AUROC 0.98 vs. 0.90, respectively).
CONCLUSION: Our model provides an explainable deep-learning method for detecting increased NT. This technique can be used to screen for other fetal anomalies in the first trimester of pregnancy.
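The two headline metrics in this abstract, the Dice overlap used for segmentation quality and the AUROC used to score the NT:fetus ratio as a screening statistic, can be computed directly; a minimal NumPy sketch on toy arrays (illustrative data only, not the study's images or thresholds):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def auroc(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUROC via the rank-sum (Mann-Whitney) identity, averaging tied ranks."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks over tied scores
        ranks[scores == s] = ranks[scores == s].mean()
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# toy example: two overlapping 4x4 masks and a toy NT:fetus ratio screen
m1 = np.zeros((4, 4)); m1[1:3, 1:3] = 1
m2 = np.zeros((4, 4)); m2[1:3, 1:4] = 1
ratio = np.array([0.02, 0.03, 0.08, 0.10, 0.04, 0.09])
thick = np.array([0,    0,    1,    1,    0,    1])
print(dice(m1, m2))         # 2*4 / (4+6) = 0.8
print(auroc(ratio, thick))  # 1.0: every thick-NT ratio exceeds every normal one
```

The rank-sum form is what lets the NT:fetus ratio be scored without first choosing a cutoff; sweeping a threshold afterwards yields operating points such as the 91% specificity / 94% sensitivity pair quoted above.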
Affiliation(s)
- Bhavya Kasera
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - Division of Genetics and Genome Biology, Hospital for Sick Children, Toronto, Ontario, Canada
- Shiri Shinar
  - Department of Obstetrics and Gynaecology, Fetal Medicine Unit, Mount Sinai Hospital and University of Toronto, Toronto, Ontario, Canada
  - Ontario Fetal Centre, Toronto, Ontario, Canada
- Parinita Edke
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - Division of Genetics and Genome Biology, Hospital for Sick Children, Toronto, Ontario, Canada
- Vagisha Pruthi
  - Department of Obstetrics and Gynaecology, Fetal Medicine Unit, Mount Sinai Hospital and University of Toronto, Toronto, Ontario, Canada
- Anna Goldenberg
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - Division of Genetics and Genome Biology, Hospital for Sick Children, Toronto, Ontario, Canada
  - Vector Institute, Toronto, Ontario, Canada
  - CIFAR, Toronto, Ontario, Canada
- Lauren Erdman
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - Division of Genetics and Genome Biology, Hospital for Sick Children, Toronto, Ontario, Canada
  - James M. Anderson Center for Health Systems Excellence, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
  - Center for Computational Medicine, Hospital for Sick Children, Toronto, Ontario, Canada
- Tim Van Mieghem
  - Department of Obstetrics and Gynaecology, Fetal Medicine Unit, Mount Sinai Hospital and University of Toronto, Toronto, Ontario, Canada
  - Ontario Fetal Centre, Toronto, Ontario, Canada
2.
Peng Y, Luo Y, Yan J, Li W, Liao Y, Yan L, Ling H, Long C. Automatic measurement of fetal anterior neck lower jaw angle in nuchal translucency scans. Sci Rep 2024; 14:5351. [PMID: 38438512] [PMCID: PMC10912614] [DOI: 10.1038/s41598-024-55974-x]
Abstract
This study proposes an end-to-end algorithm based on a U-net-optimized generative adversarial network to predict anterior neck lower jaw angles (ANLJA), which are used to define fetal head posture (FHP) during nuchal translucency (NT) measurement. We prospectively collected 720 FHP images (half hyperextension and half normal posture) and regarded manual measurement as the gold standard. Seventy percent of the FHP images (half hyperextension and half normal posture) were used to fit the models and the rest to evaluate them, in the hyperextension group, the normal posture group (NPG), and the total group. The root mean square error, explained variation, and mean absolute percentage error (MAPE) were used for the validity assessment; the two-sample t test, Mann-Whitney U test, Wilcoxon signed-rank test, Bland-Altman plot, and intraclass correlation coefficient (ICC) for the reliability evaluation. The proposed algorithm outperformed all competitors in every group and on every validity index except the MAPE, where Inception-v3 surpassed it in the NPG. The two-sample t test and Mann-Whitney U test indicated no significant difference between the proposed method and the gold standard at the group level. The Wilcoxon signed-rank test revealed significant differences between the new approach and the gold standard at the individual level. All points in the Bland-Altman plots fell within the upper and lower limits of agreement. The inter-rater ICCs of the ultrasonographers, our proposed algorithm, and its competitors were graded as good, good to moderate, and moderate to poor reliability, respectively. Our proposed approach surpasses the competition and is as reliable as manual measurement.
Affiliation(s)
- Yulin Peng
  - Department of Ultrasonography, Hunan Provincial Maternal and Child Health Care Hospital, No. 53 Xiangchun Road, Changsha, 410008, Hunan, China
  - NHC Key Laboratory of Birth Defect for Research and Prevention, Hunan Provincial Maternal and Child Health Care Hospital, Changsha, 410133, Hunan, China
  - Department of Ultrasonography, Second Xiangya Hospital of Central South University, No. 139 Renmin Middle Road, Changsha, 410028, Hunan, China
- Yingchun Luo
  - Department of Ultrasonography, Hunan Provincial Maternal and Child Health Care Hospital, No. 53 Xiangchun Road, Changsha, 410008, Hunan, China
  - NHC Key Laboratory of Birth Defect for Research and Prevention, Hunan Provincial Maternal and Child Health Care Hospital, Changsha, 410133, Hunan, China
- Junyi Yan
  - Clinical Laboratory, Hunan Provincial Maternal and Child Health Care Hospital, No. 53 Xiangchun Road, Changsha, 410008, Hunan, China
- Wenjuan Li
  - Department of Ultrasonography, Hunan Provincial Maternal and Child Health Care Hospital, No. 53 Xiangchun Road, Changsha, 410008, Hunan, China
- Yimin Liao
  - Department of Ultrasonography, Hunan Provincial Maternal and Child Health Care Hospital, No. 53 Xiangchun Road, Changsha, 410008, Hunan, China
- Lingyu Yan
  - School of Computer Science, Hubei University of Technology, No. 28 Nanli Road, Wuhan, 430068, Hubei, China
- Hefei Ling
  - School of Computer Science and Technology, Huazhong University of Science and Technology, No. 1037 Luoyu Road, Wuhan, 430074, China
- Can Long
  - Department of Ultrasonography, Hunan Provincial Maternal and Child Health Care Hospital, No. 53 Xiangchun Road, Changsha, 410008, Hunan, China
3.
Ramirez Zegarra R, Ghi T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. Ultrasound Obstet Gynecol 2023; 62:185-194. [PMID: 36436205] [DOI: 10.1002/uog.26130]
Abstract
Deep learning is considered the leading artificial intelligence tool in image analysis in general. Deep-learning algorithms excel at image recognition, which makes them valuable in medical imaging. Obstetric ultrasound has become the gold standard imaging modality for detection and diagnosis of fetal malformations. However, ultrasound relies heavily on the operator's experience, making it unreliable in inexperienced hands. Several studies have proposed the use of deep-learning models as a tool to support sonographers, in an attempt to overcome these problems inherent to ultrasound. Deep learning has many clinical applications in the field of fetal imaging, including identification of normal and abnormal fetal anatomy and measurement of fetal biometry. In this Review, we provide a comprehensive explanation of the fundamentals of deep learning in fetal imaging, with particular focus on its clinical applicability. © 2022 International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- R Ramirez Zegarra
  - Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- T Ghi
  - Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
4.
Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802] [DOI: 10.1002/pd.6411]
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the following keywords: "neural networks", OR "artificial intelligence", OR "machine learning", OR "deep learning", AND "obstetrics", OR "obstetrical", OR "fetus", OR "foetus", OR "fetal", OR "foetal", OR "pregnancy", OR "pregnant", AND "ultrasound", from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound. Obstetric ultrasound was defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. The authors' search identified a total of 127 papers that fulfilled the inclusion criteria. The current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses including assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan
  - Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Lea Nehme
  - Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Alfred Abuhamad
  - Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
5.
Kim HY, Cho GJ, Kwon HS. Applications of artificial intelligence in obstetrics. Ultrasonography 2023; 42:2-9. [PMID: 36588179] [PMCID: PMC9816710] [DOI: 10.14366/usg.22063]
Abstract
Artificial intelligence, which has been applied as an innovative technology in multiple fields of healthcare, analyzes large amounts of data to assist in disease prediction, prevention, and diagnosis, as well as in patient monitoring. In obstetrics, artificial intelligence has been actively applied and integrated into our daily medical practice. This review provides an overview of artificial intelligence systems currently used for obstetric diagnostic purposes, such as fetal cardiotocography, ultrasonography, and magnetic resonance imaging, and demonstrates how these methods have been developed and clinically applied.
Affiliation(s)
- Ho Yeon Kim
  - Department of Obstetrics and Gynecology, Korea University College of Medicine, Seoul, Korea
- Geum Joon Cho
  - Department of Obstetrics and Gynecology, Korea University College of Medicine, Seoul, Korea
- Han Sung Kwon
  - Division of Maternal and Fetal Medicine, Department of Obstetrics and Gynecology, Research Institute of Medical Science, Konkuk University School of Medicine, Seoul, Korea
6.
Walker MC, Willner I, Miguel OX, Murphy MSQ, El-Chaâr D, Moretti F, Dingwall Harvey ALJ, Rennicks White R, Muldoon KA, Carrington AM, Hawken S, Aviv RI. Using deep-learning in fetal ultrasound analysis for diagnosis of cystic hygroma in the first trimester. PLoS One 2022; 17:e0269323. [PMID: 35731736] [PMCID: PMC9216531] [DOI: 10.1371/journal.pone.0269323]
Abstract
Objective: To develop and internally validate a deep-learning algorithm from fetal ultrasound images for the diagnosis of cystic hygromas in the first trimester.
Methods: All first-trimester ultrasound scans with a diagnosis of a cystic hygroma between 11 and 14 weeks gestation at our tertiary care centre in Ontario, Canada were studied. Ultrasound scans with normal nuchal translucency (NT) were used as controls. The dataset was partitioned with 75% of images used for model training and 25% for model validation. Images were analyzed using a DenseNet model, and the accuracy of the trained model in correctly identifying cases of cystic hygroma was assessed by calculating sensitivity, specificity, and the area under the receiver-operating characteristic (ROC) curve. Gradient-weighted class activation maps (Grad-CAM) were generated to assess model interpretability.
Results: The dataset included 289 sagittal fetal ultrasound images: 129 cystic hygroma cases and 160 normal NT controls. Overall model accuracy was 93% (95% CI: 88-98%), sensitivity 92% (95% CI: 79-100%), specificity 94% (95% CI: 91-96%), and the area under the ROC curve 0.94 (95% CI: 0.89-1.0). Grad-CAM heat maps demonstrated that the model predictions were driven primarily by the fetal posterior cervical area.
Conclusions: Our findings demonstrate that deep-learning algorithms can achieve high accuracy in diagnostic interpretation of cystic hygroma in the first trimester, validated against expert clinical assessment.
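The confidence intervals quoted for accuracy, sensitivity, and specificity are commonly obtained by resampling; a hedged sketch of a percentile bootstrap for accuracy on toy predictions (the labels below are illustrative, not the study's data):

```python
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and percentile-bootstrap (1 - alpha) CI for accuracy."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        accs[i] = np.mean(y_true[idx] == y_pred[idx])
    point = np.mean(y_true == y_pred)
    return point, np.quantile(accs, alpha / 2), np.quantile(accs, 1 - alpha / 2)

# toy run: 90 correct calls out of 100 'cystic hygroma vs. normal NT' labels
y_true = np.array([1] * 50 + [0] * 50)
y_pred = y_true.copy()
y_pred[:5] = 0     # 5 missed cases
y_pred[50:55] = 1  # 5 false alarms
acc, lo, hi = bootstrap_accuracy_ci(y_true, y_pred)
print(acc, lo <= acc <= hi)
```

The same resampling loop works for sensitivity or specificity by restricting the accuracy computation to the positive or negative resampled labels.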
Affiliation(s)
- Mark C. Walker
  - Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Canada
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
  - International and Global Health Office, University of Ottawa, Ottawa, Canada
  - School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
  - Department of Obstetrics, Gynecology & Newborn Care, The Ottawa Hospital, Ottawa, Canada
  - BORN Ontario, Children’s Hospital of Eastern Ontario Research Institute, Ottawa, Canada
- Inbal Willner
  - Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Canada
  - Department of Obstetrics, Gynecology & Newborn Care, The Ottawa Hospital, Ottawa, Canada
- Olivier X. Miguel
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- Malia S. Q. Murphy
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- Darine El-Chaâr
  - Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Canada
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
  - School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
  - Department of Obstetrics, Gynecology & Newborn Care, The Ottawa Hospital, Ottawa, Canada
- Felipe Moretti
  - Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Canada
  - Department of Obstetrics, Gynecology & Newborn Care, The Ottawa Hospital, Ottawa, Canada
- Ruth Rennicks White
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
  - Department of Obstetrics, Gynecology & Newborn Care, The Ottawa Hospital, Ottawa, Canada
- Katherine A. Muldoon
  - Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Canada
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- André M. Carrington
  - Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
  - Department of Radiology and Medical Imaging, University of Ottawa, Ottawa, Canada
- Steven Hawken
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
  - School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
- Richard I. Aviv
  - Department of Radiology and Medical Imaging, University of Ottawa, Ottawa, Canada
  - Department of Radiology and Medical Imaging, The Ottawa Hospital, Ottawa, Canada
  - Neuroscience Program, Ottawa Hospital Research Institute, Ottawa, Canada
7.
Meshaka R, Gaunt T, Shelmerdine SC. Artificial intelligence applied to fetal MRI: A scoping review of current research. Br J Radiol 2022:20211205. [PMID: 35286139] [DOI: 10.1259/bjr.20211205]
Abstract
Artificial intelligence (AI) is defined as the development of computer systems to perform tasks normally requiring human intelligence. A subset of AI, known as machine learning (ML), takes this further by drawing inferences from patterns in data to 'learn' and 'adapt' without explicit instructions, meaning that computer systems can 'evolve' and hopefully improve without necessarily requiring external human influence. The potential of this novel technology has generated great interest in the medical community regarding how it can be applied in healthcare. Within radiology, the focus has mostly been on applications in oncological imaging, although new roles in other subspecialty fields are slowly emerging. In this scoping review, we performed a literature search of the current state of the art and emerging trends in the use of artificial intelligence as applied to fetal magnetic resonance imaging (MRI). Our search yielded several publications covering AI tools for anatomical organ segmentation, improved imaging sequences, and diagnostic applications such as automated biometric fetal measurements and the detection of congenital and acquired abnormalities. We highlight our own perceived gaps in this literature and suggest future avenues for further research. It is our hope that the information presented highlights the varied ways in which novel digital technology could make an impact on future clinical practice with regard to fetal MRI.
Affiliation(s)
- Riwa Meshaka
  - Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK
- Trevor Gaunt
  - Department of Radiology, University College London Hospitals NHS Foundation Trust, London, UK
- Susan C Shelmerdine
  - Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK
  - UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, UK
  - NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, UK
  - Department of Radiology, St. George's Hospital, Blackshaw Road, London, UK
8.
He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021; 8:729978. [PMID: 34977053] [PMCID: PMC8716504] [DOI: 10.3389/fmed.2021.729978]
Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, the research on AI in prenatal US has practical significance with its application to prenatal US diagnosis improving work efficiency, providing quantitative assessments, standardizing measurements, improving diagnostic accuracy, and automating image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.
Affiliation(s)
- Lizhu Chen
  - Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
9.
Chen Z, Liu Z, Du M, Wang Z. Artificial Intelligence in Obstetric Ultrasound: An Update and Future Applications. Front Med (Lausanne) 2021; 8:733468. [PMID: 34513890] [PMCID: PMC8429607] [DOI: 10.3389/fmed.2021.733468]
Abstract
Artificial intelligence (AI) can support clinical decisions and provide quality assurance for images. Although ultrasonography is commonly used in the field of obstetrics and gynecology, the use of AI is still in its infancy. Nevertheless, in repetitive ultrasound examinations, such as those involving automatic positioning and identification of fetal structures, prediction of gestational age (GA), and real-time image quality assurance, AI has great potential. To realize its application, it is necessary to promote interdisciplinary communication between AI developers and sonographers. In this review, we outline the benefits of AI technology in obstetric ultrasound diagnosis, through optimized image acquisition, quantification, segmentation, and location identification, which can be helpful for obstetric ultrasound diagnosis in different periods of pregnancy.
Affiliation(s)
- Zhiyi Chen
  - The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
  - Institute of Medical Imaging, University of South China, Hengyang, China
- Zhenyu Liu
  - The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Meng Du
  - Institute of Medical Imaging, University of South China, Hengyang, China
- Ziyao Wang
  - The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
10.
Yang X, Dou H, Huang R, Xue W, Huang Y, Qian J, Zhang Y, Luo H, Guo H, Wang T, Xiong Y, Ni D. Agent With Warm Start and Adaptive Dynamic Termination for Plane Localization in 3D Ultrasound. IEEE Trans Med Imaging 2021; 40:1950-1961. [PMID: 33784618] [DOI: 10.1109/tmi.2021.3069663]
Abstract
Accurate standard plane (SP) localization is the fundamental step for prenatal ultrasound (US) diagnosis. Typically, dozens of US SPs are collected to determine the clinical diagnosis. 2D US has to perform a scan for each SP, which is time-consuming and operator-dependent. In contrast, 3D US captures multiple SPs in one shot and has the inherent advantages of less user-dependency and more efficiency. Automatically locating SPs in 3D US is very challenging due to the huge search space and large fetal posture variations. Our previous study proposed a deep reinforcement learning (RL) framework with an alignment module and active termination to localize SPs in 3D US automatically. However, termination of the agent search in RL is important and affects practical deployment. In this study, we enhance our previous RL framework with a newly designed adaptive dynamic termination that enables an early stop of the agent search, saving up to 67% of inference time and thus boosting both the accuracy and efficiency of the RL framework. Besides, we validate the effectiveness and generalizability of our algorithm extensively on our in-house multi-organ datasets containing 433 fetal brain volumes, 519 fetal abdomen volumes, and 683 uterus volumes. Our approach achieves localization errors of 2.52 mm/10.26°, 2.48 mm/10.39°, and 2.02 mm/10.48° for the transcerebellar, transventricular, and transthalamic planes in the fetal brain; 2.00 mm/14.57° for the abdominal plane in the fetal abdomen; and 2.61 mm/9.71°, 3.09 mm/9.58°, and 1.49 mm/7.54° for the mid-sagittal, transverse, and coronal planes in the uterus. Experimental results show that our method is general and has the potential to improve the efficiency and standardization of US scanning.
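The paired errors reported above (e.g. 2.52 mm/10.26°) combine a translation and a rotation component: a point-to-plane distance plus the angle between plane normals. A small NumPy sketch, assuming planes are parameterized as n·x = d (this parameterization is an illustration, not the paper's exact protocol):

```python
import numpy as np

def plane_errors(n_pred, d_pred, n_true, ref_point):
    """Distance (mm) from a reference point on the true plane to the
    predicted plane, plus the acute angle (degrees) between plane normals.
    Planes are given as n . x = d; n need not be unit length."""
    n_pred, n_true = np.asarray(n_pred, float), np.asarray(n_true, float)
    s_pred = np.linalg.norm(n_pred)
    n_pred, d_pred = n_pred / s_pred, d_pred / s_pred  # normalize plane equation
    n_true = n_true / np.linalg.norm(n_true)
    dist = abs(n_pred @ np.asarray(ref_point, float) - d_pred)
    cosang = np.clip(abs(n_pred @ n_true), 0.0, 1.0)   # abs(): normal sign is arbitrary
    return dist, float(np.degrees(np.arccos(cosang)))

# toy example: true mid-sagittal plane z = 0, prediction tilted and shifted
ref = np.array([0.0, 0.0, 0.0])  # any point on the true plane
dist, ang = plane_errors([0, 0.1, 1], 2.0, [0, 0, 1], ref)
print(round(dist, 3), round(ang, 2))
```

Averaging these two quantities over a test set gives exactly the mm/degree error pairs the abstract reports per anatomical plane.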
11.
Gao Y, Zhu Y, Liu B, Hu Y, Yu G, Guo Y. Automated Recognition of Ultrasound Cardiac Views Based on Deep Learning with Graph Constraint. Diagnostics (Basel) 2021; 11:1177. [PMID: 34209538] [PMCID: PMC8303427] [DOI: 10.3390/diagnostics11071177]
Abstract
In transthoracic echocardiographic (TTE) examination, it is essential to identify the cardiac views accurately. Computer-aided recognition is expected to improve the accuracy of cardiac views in the TTE examination, particularly when obtained by non-trained providers. A new method for automatic recognition of cardiac views is proposed, consisting of three processes. First, a spatial transform network is used to learn cardiac shape changes during a cardiac cycle, which reduces intra-class variability. Second, a channel attention mechanism is introduced to adaptively recalibrate channel-wise feature responses. Finally, the structured signals formed by the similarities among cardiac views are transformed into a graph-based image embedding, which acts as an unsupervised regularization constraint to improve generalization accuracy. The proposed method was trained and tested on 171,792 cardiac images from 584 subjects. The overall accuracy of the proposed method for cardiac image classification is 99.10%, and the mean AUC is 99.36%, better than known methods. Moreover, the overall accuracy is 97.73%, and the mean AUC is 98.59%, on an independent test set with 37,883 images from 100 subjects. The proposed automated recognition model achieved accuracy comparable with true cardiac views, and thus can be applied clinically to help find standard cardiac views.
Affiliation(s)
- Yanhua Gao
  - Department of Medical Imaging, The First Affiliated Hospital of Xi’an Jiaotong University, #277 West Yanta Road, Xi’an 710061, China
  - Department of Ultrasound, Shaanxi Provincial People’s Hospital, #256 West Youyi Road, Xi’an 710068, China
- Yuan Zhu
  - Department of Ultrasound, Shaanxi Provincial People’s Hospital, #256 West Youyi Road, Xi’an 710068, China
- Bo Liu
  - Department of Ultrasound, Shaanxi Provincial People’s Hospital, #256 West Youyi Road, Xi’an 710068, China
- Yue Hu
  - Department of Biomedical Engineering, School of Basic Medical Science, Central South University, #172 Tongzipo Road, Changsha 410013, China
- Gang Yu
  - Department of Biomedical Engineering, School of Basic Medical Science, Central South University, #172 Tongzipo Road, Changsha 410013, China
- Youmin Guo
  - Department of Medical Imaging, The First Affiliated Hospital of Xi’an Jiaotong University, #277 West Yanta Road, Xi’an 710061, China
12.
Automatic Fetal Middle Sagittal Plane Detection in Ultrasound Using Generative Adversarial Network. Diagnostics (Basel) 2020; 11:21. [PMID: 33374307] [PMCID: PMC7824131] [DOI: 10.3390/diagnostics11010021]
Abstract
Background and Objective: In the first trimester of pregnancy, fetal growth and abnormalities can be assessed using the exact middle sagittal plane (MSP) of the fetus. However, ultrasound (US) image quality and operator experience affect the accuracy. We present an automatic system that enables precise fetal MSP detection from three-dimensional (3D) US and provide an evaluation of its performance using a generative adversarial network (GAN) framework.
Method: The neural network is designed as a filter and generates masks to obtain the MSP, learning the features and MSP location in 3D space. Using the proposed image analysis system, a seed point was obtained from 218 first-trimester fetal 3D US volumes using deep learning, and the MSP was automatically extracted.
Results: The experimental results reveal the feasibility of the proposed approach and excellent agreement between the automatically and manually detected MSPs. There was no significant difference between the semi-automatic and automatic systems. Further, the inference time of the automatic system was up to two times faster than the semi-automatic approach.
Conclusion: The proposed system offers precise fetal MSP measurements. Therefore, this automatic fetal MSP detection and measurement approach is anticipated to be useful clinically. The proposed system can also be applied to other relevant clinical fields in the future.
14.
Ryou H, Yaqub M, Cavallaro A, Papageorghiou AT, Alison Noble J. Automated 3D ultrasound image analysis for first trimester assessment of fetal health. Phys Med Biol 2019; 64:185010. [PMID: 31408850] [DOI: 10.1088/1361-6560/ab3ad1]
Abstract
The first trimester fetal ultrasound scan is important to confirm fetal viability, to estimate the gestational age of the fetus, and to detect fetal anomalies early in pregnancy. First trimester ultrasound images have a different appearance from second trimester scans, reflecting the different stage of fetal development. There is limited literature on automating image-based assessment for this earlier trimester, and most of it focuses on one specific fetal anatomy. In this paper, we consider automation to support first trimester assessment of multiple fetal anatomies, covering both visualization and measurement, from a single 3D ultrasound scan. We present a deep learning and image processing solution (i) to perform semantic segmentation of the whole fetus, (ii) to estimate plane orientation for standard biometry views, (iii) to localize and automatically estimate biometry, and (iv) to detect fetal limbs from a 3D first trimester volume. The computational analysis methods were built using a real-world dataset (n = 44 volumes). An evaluation on a further independent clinical dataset (n = 21 volumes) showed that the automated methods approached human expert assessment of a 3D volume.
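Semantic segmentations like these are typically scored by overlap with expert annotations; the Dice coefficient is the standard metric (it is the one reported for the NT segmentation study heading this page). A minimal sketch of that metric on binary masks, illustrative only:

```python
import numpy as np

def dice(a, b):
    # Dice overlap between two binary masks:
    # 2 * |A ∩ B| / (|A| + |B|), in [0, 1], 1 meaning perfect agreement.
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / denom if denom else 1.0
```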
Affiliation(s)
- Hosuk Ryou
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom (author to whom correspondence should be addressed)
15
Tobore I, Li J, Liu Y, Al-Handarish Y, Kandwal A, Nie Z, Wang L. Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations. JMIR Mhealth Uhealth 2019; 7:e11966. [PMID: 31376272 PMCID: PMC6696854 DOI: 10.2196/11966] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Received: 08/17/2018] [Revised: 04/14/2019] [Accepted: 06/12/2019] [Indexed: 01/10/2023] Open
Abstract
The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has recorded a number of achievements in unearthing meaningful features and accomplishing tasks that were hitherto difficult for other methods and human experts to solve. Currently, biological and medical devices, treatments, and applications generate large volumes of data in the form of images, sounds, text, graphs, and signals, creating the concept of big data. DL is a developing trend for data representation and analysis in the wake of big data: a type of machine learning algorithm that cascades deeper (that is, more) hidden layers of similar function into the network, giving it the capacity to extract meaning from medical big data. Mobile health (mHealth) is a key driver of the current transformation toward personalized health care delivery, and DL can provide the analysis for the deluge of data generated by mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of the trends in DL by capturing literature from PubMed and the Institute of Electrical and Electronics Engineers database that implements different variants of DL. We highlight the implementation of DL in health care, which we categorize into biological systems, electronic health records, medical images, and physiological signals. In addition, we discuss some inherent challenges of DL affecting the biomedical and health domains, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.
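The review's working definition of DL, hidden layers of similar function cascaded into a network, can be sketched as a plain feed-forward pass. This is an illustrative NumPy toy, not any model from the review:

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear unit, a common hidden-layer nonlinearity.
    return np.maximum(0.0, x)

def mlp_forward(x, weights):
    # Cascade of hidden layers of the same functional form (affine + ReLU),
    # followed by a final affine output layer. The depth of the cascade is
    # what distinguishes "deep" learning from shallow models.
    h = x
    for W, b in weights[:-1]:
        h = relu(h @ W + b)
    W, b = weights[-1]
    return h @ W + b
```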
Collapse
Affiliation(s)
- Igbe Tobore
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Graduate University, Chinese Academy of Sciences, Beijing, China
- Jingzhen Li
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Liu Yuhang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yousef Al-Handarish
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Abhishek Kandwal
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zedong Nie
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Wang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
16
van den Heuvel TLA, Petros H, Santini S, de Korte CL, van Ginneken B. Automated Fetal Head Detection and Circumference Estimation from Free-Hand Ultrasound Sweeps Using Deep Learning in Resource-Limited Countries. Ultrasound Med Biol 2019; 45:773-785. [PMID: 30573305 DOI: 10.1016/j.ultrasmedbio.2018.09.015] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Received: 06/27/2018] [Revised: 09/05/2018] [Accepted: 09/14/2018] [Indexed: 06/09/2023]
Abstract
Ultrasound imaging remains out of reach for most pregnant women in developing countries because it requires a trained sonographer to acquire and interpret the images. We address this problem by presenting a system that can automatically estimate the fetal head circumference (HC) from data obtained using the obstetric sweep protocol (OSP). The OSP consists of multiple pre-defined sweeps of the ultrasound transducer over the abdomen of the pregnant woman and can be taught within a day to any health care worker without prior knowledge of ultrasound. An experienced sonographer acquired both the standard plane (to obtain the reference HC) and the OSP from 183 pregnant women at St. Luke's Hospital, Wolisso, Ethiopia. The OSP data, which will most likely not contain the standard plane, were used to automatically estimate HC with two fully convolutional neural networks. First, a VGG-Net-inspired network was trained to automatically detect the frames that contained the fetal head. Second, a U-net-inspired network was trained to automatically measure the HC in every frame in which the first network detected a fetal head. The HC was estimated from these frame measurements, and the Hadlock curve was used to determine gestational age (GA). Most automatically estimated GAs fell within the P2.5-P97.5 interval of the Hadlock curve relative to the GAs obtained from the reference HC, showing that GA can be estimated automatically from OSP data. Our method therefore has potential application in providing maternal care in resource-constrained countries.
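HC is conventionally reported as the circumference of an ellipse fitted to the fetal skull outline. One standard closed form for that circumference is Ramanujan's approximation; the sketch below shows only this final arithmetic step, not the paper's U-net-based measurement:

```python
import math

def ellipse_circumference(a, b):
    # Ramanujan's first approximation to the perimeter of an ellipse
    # with semi-axes a and b. It is exact when a == b (a circle) and
    # highly accurate for the moderate eccentricities of skull fits.
    return math.pi * (3.0 * (a + b) - math.sqrt((3.0 * a + b) * (a + 3.0 * b)))
```

Given semi-axes in millimetres from the fitted ellipse, the returned value is the HC in millimetres.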
Collapse
Affiliation(s)
- Thomas L A van den Heuvel
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands.
- Hezkiel Petros
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia
- Stefano Santini
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia
- Chris L de Korte
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia; Physics of Fluids Group, MIRA, University of Twente, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Fraunhofer MEVIS, Bremen, Germany
17
Automatic measurement of fetal nuchal translucency from three-dimensional ultrasound data. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:3417-3420. [PMID: 29060631 DOI: 10.1109/embc.2017.8037590] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Indexed: 11/07/2022]
Abstract
Nuchal translucency (NT), a collection of fluid at the back of the fetal neck, is related to chromosomal defects and early cardiac failure in the first trimester of pregnancy. In the clinic, NT thickness is an important marker in prenatal screening and is measured manually by sonographers in the mid-sagittal plane. In this paper, an automatic method based on dynamic programming is proposed to detect the thickness and area of the NT in the mid-sagittal plane; the volume of the NT in the whole three-dimensional ultrasound data set is also measured. A novel cost function for the dynamic programming step yields higher accuracy in NT border detection. Because the NT is a fluid collection, these higher-dimensional NT markers have greater potential to represent the amount of fluid.
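The dynamic-programming idea here is the classic minimal-cost path through a cost image: each column keeps the cheapest way to reach every row, and backtracking recovers the border. A toy version with a hypothetical cost image (the paper's novel cost function is not reproduced) might look like:

```python
import numpy as np

def min_cost_border(cost):
    # Dynamic programming: cheapest left-to-right path through a 2D cost
    # image, moving at most one row per column. This is a simple model of
    # tracing a border (e.g. an NT edge) along an image.
    rows, cols = cost.shape
    acc = cost.astype(float).copy()        # accumulated minimal cost
    back = np.zeros((rows, cols), dtype=int)  # backpointers for the path
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(i - 1, 0), min(i + 1, rows - 1)
            k = lo + int(np.argmin(acc[lo:hi + 1, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    # Backtrack from the cheapest endpoint in the last column.
    i = int(np.argmin(acc[:, -1]))
    path = [i]
    for j in range(cols - 1, 0, -1):
        i = int(back[i, j])
        path.append(i)
    return path[::-1]  # row index of the border in each column
```

With a real cost function derived from image gradients, the returned path would follow the NT border; here the structure of the recursion is the point.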