1
Yasrab R, Zhao H, Fu Z, Drukker L, Papageorghiou AT, Noble JA. Automating the Human Action of First-Trimester Biometry Measurement from Real-World Freehand Ultrasound. Ultrasound Med Biol 2024; 50:805-816. [PMID: 38467521] [DOI: 10.1016/j.ultrasmedbio.2024.01.018]
Abstract
OBJECTIVE Automated medical image analysis solutions should closely mimic complete human actions to be useful in clinical practice. More often, however, an automated image analysis solution represents only part of a human task, which restricts its practical utility. In the case of ultrasound-based fetal biometry, an automated solution should ideally recognize key fetal structures in freehand video guidance, select a standard plane from a video stream and perform biometry. A complete automated solution should automate all three subactions. METHODS In this article, we consider how to automate the complete human action of first-trimester biometry measurement from real-world freehand ultrasound. In the proposed hybrid convolutional neural network (CNN) architecture, a classification regression-based guidance model detects and tracks fetal anatomical structures (using visual cues) in the ultrasound video. Several high-quality standard planes containing the mid-sagittal view of the fetus are sampled at multiple time stamps (using a custom-designed confident-frame detector) based on the estimated probability values associated with the predicted anatomical structures that define the biometry plane. Automated semantic segmentation is performed on the selected frames to extract fetal anatomical landmarks. The crown-rump length (CRL) estimate is calculated as the mean CRL over these frames. RESULTS Our fully automated method correlates strongly with clinical expert CRL measurement (Pearson's r = 0.92, R² = 0.84) and has a low mean absolute error of 0.834 weeks for fetal age estimation on a test data set of 42 videos. CONCLUSION A novel algorithm for standard plane detection employs a quality detection mechanism defined by clinical standards, ensuring precise biometric measurements.
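The multi-frame averaging step described above can be sketched as follows; the threshold value, data layout, and function name are illustrative assumptions, not details taken from the paper:

```python
def mean_crl_over_confident_frames(frames, threshold=0.9):
    """Average CRL over frames whose plane confidence exceeds a threshold.

    `frames` is a list of (confidence, crl_mm) pairs; both the threshold
    and the data layout are hypothetical, for illustration only.
    """
    confident = [crl for conf, crl in frames if conf >= threshold]
    if not confident:
        raise ValueError("no frame passed the confidence threshold")
    return sum(confident) / len(confident)
```

A frame that the confident-frame detector scores below the threshold simply drops out of the mean, which is the robustness argument for measuring over multiple frames rather than one.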
Affiliation(s)
- Robail Yasrab
- Department of Engineering Science, University of Oxford, Oxford, UK; School of Clinical Medicine, University of Cambridge, Cambridge, UK
- He Zhao
- Department of Engineering Science, University of Oxford, Oxford, UK
- Zeyu Fu
- Department of Engineering Science, University of Oxford, Oxford, UK
- Lior Drukker
- Department of Engineering Science, University of Oxford, Oxford, UK; Sackler Faculty of Medicine, Rabin Medical Center, Tel-Aviv University, Tel-Aviv, Israel
- Aris T Papageorghiou
- Nuffield Department of Women's Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble
- Department of Engineering Science, University of Oxford, Oxford, UK
2
Liu Z, Yu W, Wu X, Yang T, Lyu G, Liu P, Xue H. Detection of fetal facial anatomy in standard ultrasonographic sections based on real-time target detection network. Int J Gynaecol Obstet 2024; 165:916-928. [PMID: 37807664] [DOI: 10.1002/ijgo.15145]
Abstract
Prenatal ultrasound is currently one of the most important tools for screening fetal malformations. During prenatal ultrasound diagnosis, accurate recognition of the fetal facial ultrasound standard plane is crucial for detecting facial malformations and screening for disease. Because fetal facial structures are densely distributed, lack clear contour boundaries, occupy small areas, and overlap substantially within detection frames, this paper treats the fetal facial standard plane and its structures as a general object detection task for the first time and applies the real-time YOLOv5s network to detecting and classifying structures in fetal facial ultrasound standard planes. First, we detect the structures in a single plane, treating each structure class in that plane as the recognition object. Second, we run structure detection experiments on three standard planes; then, building on the previous stage, we collect images of all regions covered by ultrasound examinations of multiple fetuses. In the single-class structure detection experiment and the combined detection-and-classification experiment on the three standard planes, the model performed well on both metrics, achieving precision of 98.3% and 98.1% and recall of 99.3% and 98.2%, respectively. The experimental results show that the model can identify fetal facial anatomy and standard planes across different data sets, helping physicians automatically and quickly screen out the standard planes in fetal facial ultrasound.
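Detection precision and recall figures like those above are conventionally computed by matching predicted boxes to ground-truth boxes at an intersection-over-union (IoU) threshold. A minimal stdlib sketch; the box format and the 0.5 threshold are common conventions, not details from this paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each prediction to an unmatched ground-truth box at
    IoU >= thr; return (precision, recall)."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```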
Affiliation(s)
- Zhonghua Liu
- Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Weifeng Yu
- Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Xiuming Wu
- Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Tong Yang
- School of Medicine, Huaqiao University, Quanzhou, Fujian, China
- Guorong Lyu
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, China
- Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, China
- Peizhong Liu
- School of Medicine, Huaqiao University, Quanzhou, Fujian, China
- College of Engineering, Huaqiao University, Quanzhou, Fujian, China
- Hao Xue
- College of Engineering, Huaqiao University, Quanzhou, Fujian, China
3
Pu B, Li K, Chen J, Lu Y, Zeng Q, Yang J, Li S. HFSCCD: A Hybrid Neural Network for Fetal Standard Cardiac Cycle Detection in Ultrasound Videos. IEEE J Biomed Health Inform 2024; 28:2943-2954. [PMID: 38412077] [DOI: 10.1109/jbhi.2024.3370507]
Abstract
In fetal cardiac ultrasound examination, standard cardiac cycle (SCC) recognition is the essential foundation for diagnosing congenital heart disease. Previous studies have mostly focused on detecting adult cardiac cycles, which may not be applicable to the fetus. In clinical practice, localizing SCCs requires accurately recognizing end-systole (ES) and end-diastole (ED) frames and ensuring that every frame in the cycle is a standard view. Most existing methods are not based on detecting key anatomical structures, so they may fail to reject irrelevant views and background frames, may return results containing non-standard frames, or may not work at all in clinical practice. We propose an end-to-end hybrid neural network based on an object detector to detect SCCs from fetal ultrasound videos efficiently; it consists of three modules: Anatomical Structure Detection (ASD), Cardiac Cycle Localization (CCL), and Standard Plane Recognition (SPR). Specifically, ASD uses an object detector to identify nine key anatomical structures, three cardiac motion phases, and the corresponding confidence scores from fetal ultrasound videos. On this basis, we propose a joint probability method in the CCL to learn the cardiac motion cycle from the three cardiac motion phases. In SPR, to reduce the impact of structure detection errors on the accuracy of standard plane recognition, we use the XGBoost algorithm to learn relational knowledge of the detected anatomical structures. We evaluate our method on test fetal ultrasound video datasets and clinical examination cases and achieve remarkable results. This study may pave the way for clinical practice.
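One way to read the joint-probability idea in CCL: score every candidate ED-ES-ED frame triple by the product of its per-frame phase probabilities and keep the best. This brute-force sketch is an assumption about the mechanism for illustration, not the paper's exact formulation:

```python
from itertools import combinations

def locate_cycle(ed_prob, es_prob):
    """Return the (ED1, ES, ED2) frame triple with maximal joint
    probability ed_prob[i] * es_prob[j] * ed_prob[k], with i < j < k.
    Brute force over all triples; illustrative only."""
    n = len(ed_prob)
    best, best_p = None, 0.0
    for i, k in combinations(range(n), 2):
        for j in range(i + 1, k):
            p = ed_prob[i] * es_prob[j] * ed_prob[k]
            if p > best_p:
                best, best_p = (i, j, k), p
    return best, best_p
```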
4
Alasmawi H, Bricker L, Yaqub M. FUSC: Fetal Ultrasound Semantic Clustering of Second-Trimester Scans Using Deep Self-Supervised Learning. Ultrasound Med Biol 2024; 50:703-711. [PMID: 38350787] [DOI: 10.1016/j.ultrasmedbio.2024.01.010]
Abstract
OBJECTIVE The aim of this study was to address the challenges posed by the manual labeling of fetal ultrasound images by introducing an unsupervised approach, the fetal ultrasound semantic clustering (FUSC) method. The primary objective was to automatically cluster a large volume of ultrasound images into various fetal views, reducing or eliminating the need for labor-intensive manual labeling. METHODS The FUSC method was developed using a substantial data set comprising 88,063 images. The methodology applies an unsupervised clustering approach to categorize ultrasound images into diverse fetal views. The method's effectiveness was further evaluated on an additional, unseen data set of 8187 images. The evaluation assessed clustering purity, and the entire process is detailed to provide insight into the method's performance. RESULTS The FUSC method was notably successful, achieving >92% clustering purity on the evaluation data set of 8187 images. The results demonstrate the feasibility of automatically clustering fetal ultrasound images without relying on manual labeling. The study showcases the potential of this approach for handling the large volume of ultrasound scans encountered in clinical practice, with implications for improving efficiency and accuracy in fetal ultrasound imaging. CONCLUSION The findings suggest that the FUSC method holds significant promise for fetal ultrasound imaging. By automating the clustering of ultrasound images, this approach has the potential to reduce the manual labeling burden and make the process more efficient. The results pave the way for advanced automated labeling solutions, contributing to the enhancement of clinical practice in fetal ultrasound imaging. Our code is available at https://github.com/BioMedIA-MBZUAI/FUSC.
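Clustering purity, the metric quoted above, assigns each cluster its majority true label and measures the fraction of samples that match. A self-contained sketch of the standard definition (the data layout is illustrative):

```python
from collections import Counter

def clustering_purity(cluster_ids, labels):
    """Purity = fraction of samples whose true label is the majority
    label of their assigned cluster."""
    clusters = {}
    for c, y in zip(cluster_ids, labels):
        clusters.setdefault(c, []).append(y)
    majority = sum(Counter(ys).most_common(1)[0][1]
                   for ys in clusters.values())
    return majority / len(labels)
```

Note that purity needs ground-truth labels for the evaluation set even though the clustering itself is unsupervised, which is why the method is validated on a labeled held-out set.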
Affiliation(s)
- Hussain Alasmawi
- Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Leanne Bricker
- Abu Dhabi Health Services Company (SEHA), Abu Dhabi, United Arab Emirates
- Mohammad Yaqub
- Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
5
Zhang J, Xiao S, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Advances in the Application of Artificial Intelligence in Fetal Echocardiography. J Am Soc Echocardiogr 2024; 37:550-561. [PMID: 38199332] [DOI: 10.1016/j.echo.2023.12.013]
Abstract
Congenital heart disease is a severe health risk for newborns. Early detection of abnormalities in fetal cardiac structure and function during pregnancy can help patients seek timely diagnostic and therapeutic advice, and early intervention planning can significantly improve fetal survival rates. Echocardiography is one of the most accessible and widely used tools for diagnosing fetal congenital heart disease. However, traditional fetal echocardiography has limitations due to fetal, maternal, and ultrasound equipment factors and is highly dependent on the skill of the operator. Artificial intelligence (AI) technology, with its rapidly developing computer algorithms, has great potential to help sonographers make faster and more accurate diagnoses and to bridge the skill gap between regions. In recent years, AI-assisted fetal echocardiography has been successfully applied to a wide range of ultrasound diagnoses. This article systematically reviews the applications of AI in fetal echocardiography in image processing, biometrics, and disease diagnosis, and provides an outlook for future research.
Affiliation(s)
- Junmin Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Sushan Xiao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Ye Zhu
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Zisang Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Haiyan Cao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Mingxing Xie
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Li Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
6
Liu X, Zhang Y, Zhu H, Jia B, Wang J, He Y, Zhang H. Applications of artificial intelligence-powered prenatal diagnosis for congenital heart disease. Front Cardiovasc Med 2024; 11:1345761. [PMID: 38720920] [PMCID: PMC11076681] [DOI: 10.3389/fcvm.2024.1345761]
Abstract
Artificial intelligence (AI) has made significant progress in the medical field over the last decade. AI-powered analysis of medical images and clinical records can now match the abilities of clinical physicians. Because fetuses are a unique patient group and the heart is a dynamic organ, research into applying AI to the prenatal diagnosis of congenital heart disease (CHD) is particularly active. In this review, we discuss the clinical questions and research methods involved in using AI for prenatal diagnosis of CHD, including imaging, genetic diagnosis, and risk prediction, with representative examples for each method. Finally, we discuss the current limitations of AI in prenatal diagnosis of CHD, namely Volatility, Insufficiency and Independence (VII), and propose possible solutions.
Affiliation(s)
- Xiangyu Liu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Key Laboratory of Data Science and Intelligent Computing, International Innovation Institute, Beihang University, Hangzhou, China
- Yingying Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Key Laboratory of Data Science and Intelligent Computing, International Innovation Institute, Beihang University, Hangzhou, China
- Haogang Zhu
- Key Laboratory of Data Science and Intelligent Computing, International Innovation Institute, Beihang University, Hangzhou, China
- State Key Laboratory of Software Development Environment, Beihang University, Beijing, China
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Bosen Jia
- School of Biological Sciences, Victoria University of Wellington, Wellington, New Zealand
- Jingyi Wang
- Echocardiography Medical Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Maternal-Fetal Medicine Center in Fetal Heart Disease, Beijing Anzhen Hospital, Beijing, China
- Yihua He
- Echocardiography Medical Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Maternal-Fetal Medicine Center in Fetal Heart Disease, Beijing Anzhen Hospital, Beijing, China
- Hongjia Zhang
- Key Laboratory of Data Science and Intelligent Computing, International Innovation Institute, Beihang University, Hangzhou, China
- Beijing Lab for Cardiovascular Precision Medicine, Beijing, China
7
Lei T, Feng JL, Lin MF, Xie BH, Zhou Q, Wang N, Zheng Q, Yang YD, Guo HM, Xie HN. Development and validation of an artificial intelligence assisted prenatal ultrasonography screening system for trainees. Int J Gynaecol Obstet 2024; 165:306-317. [PMID: 37789758] [DOI: 10.1002/ijgo.15167]
Abstract
OBJECTIVE Fetal anomaly screening via ultrasonography, which involves capturing and interpreting standard views, is highly challenging for inexperienced operators. We aimed to develop and validate a prenatal-screening artificial intelligence system (PSAIS) for real-time evaluation of the quality of anatomical images that indicates existing and missing structures. METHODS Still ultrasonographic images obtained between 2017 and 2018 from fetuses at 18-32 weeks of gestation were used to develop the PSAIS, which is based on YOLOv3 with global (anatomic site) and local (structure) feature extraction and can evaluate image quality and indicate existing and missing structures in fetal anatomical images. Its performance in recognizing 19 standard views was evaluated using retrospective real-world fetal scan video validation datasets from four hospitals. We stratified the sampled frames (standard, similar-to-standard, and background views at approximately 1:1:1) for experts to verify the results blindly. RESULTS The PSAIS was trained using 134,696 images and validated using 836 videos with 12,697 images. For internal and external validation, the multiclass macro-average areas under the receiver operating characteristic curve were 0.943 (95% confidence interval [CI], 0.815-1.000) and 0.958 (0.864-1.000), and the micro-average areas were 0.974 (0.970-0.979) and 0.973 (0.965-0.981), respectively. For similar-to-standard views, the PSAIS accurately labeled 90.9% (90.0%-91.4%) of frames with key structures and indicated missing structures. CONCLUSIONS An artificial intelligence system developed to assist trainees in fetal anomaly screening showed high agreement with experts in standard view identification.
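The macro- and micro-average AUCs quoted above differ only in where the averaging happens: macro averages the one-vs-rest AUC over classes, while micro pools all (score, indicator) pairs across classes first. A stdlib-only sketch using the rank-sum formulation of AUC (the probability-row data layout is an illustrative assumption):

```python
def binary_auc(scores, labels):
    """AUC via the Mann-Whitney rank-sum statistic; ties get half credit."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_micro_auc(prob_rows, true_idx, n_classes):
    """One-vs-rest AUCs: macro = mean over classes, micro = AUC over the
    flattened (score, indicator) pairs of all classes."""
    per_class, flat_scores, flat_labels = [], [], []
    for c in range(n_classes):
        scores = [row[c] for row in prob_rows]
        labels = [1 if t == c else 0 for t in true_idx]
        per_class.append(binary_auc(scores, labels))
        flat_scores += scores
        flat_labels += labels
    macro = sum(per_class) / n_classes
    micro = binary_auc(flat_scores, flat_labels)
    return macro, micro
```

Micro-averaging weights frequent classes more heavily, which is why a study reports both when the view classes are imbalanced.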
Affiliation(s)
- Ting Lei
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Jie Ling Feng
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Mei Fang Lin
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Bai Hong Xie
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangzhou, Guangdong, China
- Qian Zhou
- Clinical Trials Unit, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Nan Wang
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangzhou, Guangdong, China
- Qiao Zheng
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Yan Dong Yang
- Department of Ultrasonic Medicine, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Hong Mei Guo
- Department of Ultrasonic Medicine, DongGuan City Maternal and Child Health Hospital, DongGuan, China
- Hong Ning Xie
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
8
Deslandes A, Avery J, Chen H, Leonardi M, Condous G, Hull ML. Artificial intelligence as a teaching tool for gynaecological ultrasound: A systematic search and scoping review. Australas J Ultrasound Med 2024; 27:5-11. [PMID: 38434541] [PMCID: PMC10902831] [DOI: 10.1002/ajum.12368]
Abstract
Purpose The aim of this study was to investigate the current application of artificial intelligence (AI) tools in teaching ultrasound skills as they pertain to gynaecological ultrasound. Methods A scoping review was performed. Eight databases (MEDLINE, EMBASE, EMCARE, CINAHL, Scopus, Web of Science, IEEE Xplore and the ACM digital library) were searched in December 2022 using predefined keywords. All types of publications were eligible for inclusion so long as they reported the use of an AI tool, included reference to or discussion of teaching or improving ultrasound skills, and pertained to gynaecological ultrasound. Conference abstracts and non-English papers that could not be adequately translated into English were excluded. Results The initial database search returned 481 articles. After screening against our inclusion and exclusion criteria, two were deemed to meet the inclusion criteria. Neither of the included articles reported original research (one was a systematic review, the other a review article). Neither explicitly provided details of tools developed specifically for teaching ultrasound skills in gynaecological imaging, but both highlighted similar applications in obstetrics that could potentially be extended. Conclusion Artificial intelligence can potentially assist in training sonographers and other ultrasound operators, including in gynaecological ultrasound. This scoping review revealed, however, that no original research has yet been published reporting the use or development of such a tool specifically for gynaecological ultrasound.
Affiliation(s)
- Alison Deslandes
- Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
- Jodie Avery
- Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
- Hsiang-Ting Chen
- School of Computer and Mathematical Sciences, University of Adelaide, Adelaide, South Australia, Australia
- Mathew Leonardi
- Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
- Department of Obstetrics and Gynecology, McMaster University, Hamilton, Ontario, Canada
- George Condous
- Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
- M. Louise Hull
- Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
9
Tang R, Li Z, Jiang L, Jiang J, Zhao B, Cui L, Zhou G, Chen X, Jiang D. Development and Clinical Application of Artificial Intelligence Assistant System for Rotator Cuff Ultrasound Scanning. Ultrasound Med Biol 2024; 50:251-257. [DOI: 10.1016/j.ultrasmedbio.2023.10.010]
Abstract
OBJECTIVE We developed an intelligent assistance system for shoulder ultrasound imaging, incorporating deep-learning algorithms to facilitate standard plane recognition and automatic tissue segmentation of the rotator cuff and its surrounding structures. We evaluated the system's performance using a dedicated data set of rotator cuff ultrasound images to assess its feasibility in clinical practice. METHODS To fulfill the system's primary functions, we designed a standard plane recognition module based on the ResNet50 network and an automatic tissue segmentation module using the Mask R-CNN model. The modules were trained on carefully curated data sets. The standard plane recognition module automatically identifies a specific standard plane based on the ultrasound image characteristics. The automatic tissue segmentation module effectively delineates and segments anatomical structures within the identified standard plane. RESULTS With the use of 59,265 shoulder joint ultrasound images, the standard plane recognition model achieved an impressive recognition accuracy of 94.9% in the test set, with an average precision rate of 96.4%, recall rate of 95.4% and F1 score of 95.9%. The automatic tissue segmentation model, tested on 1886 images, exhibited a commendable average intersection over union value of 96.2%, indicating robustness and accuracy. The model achieved mean intersection over union values exceeding 90.0% for all standard planes, indicating its effectiveness in precisely delineating the anatomical structures. CONCLUSION Our shoulder joint musculoskeletal intelligence system swiftly and accurately identifies standard planes and performs automatic tissue segmentation.
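The intersection-over-union values reported above are the standard segmentation metric: per-image overlap between predicted and ground-truth masks, averaged over the test set. A minimal sketch for binary masks; the flat 0/1-list layout is an illustrative simplification:

```python
def mask_iou(pred, target):
    """IoU of two binary masks given as equal-length flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # two empty masks agree fully

def mean_iou(pairs):
    """Mean IoU over (pred, target) mask pairs, as reported per plane."""
    return sum(mask_iou(p, t) for p, t in pairs) / len(pairs)
```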
Affiliation(s)
- Rui Tang
- Department of Ultrasound, Peking University Third Hospital, Beijing, China; Peking University Health Science Center Institute of Medical Technology, Beijing, China
- Zhiqiang Li
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Ling Jiang
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Jie Jiang
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Bo Zhao
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Ligang Cui
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Guoyi Zhou
- Sonoscape Medical Corporation, Shenzhen, China
- Xin Chen
- Sonoscape Medical Corporation, Shenzhen, China
- Daimin Jiang
- Sonoscape Medical Corporation (Wuhan), Wuhan, China
10
Kaur I, Ahmad T. A cluster-based ensemble approach for congenital heart disease prediction. Comput Methods Programs Biomed 2024; 243:107922. [PMID: 37984098] [DOI: 10.1016/j.cmpb.2023.107922]
Abstract
BACKGROUND Congenital heart disease (CHD) is one of the most prevalent birth disorders. Although CHD risk factors have been the subject of numerous studies, their propensity to cause CHD has not been tested. In particular, little research has attempted to forecast CHD risk using population-based cross-sectional data, which is inherently imbalanced. OBJECTIVE The main goals of this study are to create a reliable data analysis model that can help with (i) better understanding congenital heart disease prediction in the presence of missing and unbalanced data and (ii) creating cohorts of expectant mothers with similar lifestyle characteristics. METHODS Clusters of patient cohorts are produced using the unsupervised data mining technique density-based spatial clustering of applications with noise (DBSCAN). For more accurate CHD prediction, a random forest model was trained using these clusters and their corresponding patterns. The study draws on a dataset of 33,831 expectant mothers. Missing data were handled using k-NN imputation, while the highly unbalanced data were balanced using SMOTE. These techniques are all data driven and need little to no user or expert involvement. RESULTS AND CONCLUSION Using DBSCAN, three cohorts were found. The cluster information enhanced the random forest-based CHD prediction and revealed intricate factors that influence prediction accuracy. The proposed approach gave the best results, with 99% accuracy and 0.91 AUC, and outperformed state-of-the-art methodologies. Hence, the suggested method using unsupervised learning can provide intricate information to the classifier and further enhance classification performance.
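SMOTE, used above to balance the data, synthesizes minority-class samples by interpolating between a sample and one of its nearest neighbours. A simplified stdlib sketch of that idea (function name, neighbour count, and data layout are illustrative, not the paper's implementation):

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by interpolating between
    a randomly chosen sample and one of its k nearest neighbours.
    Simplified SMOTE sketch for illustration."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: dist2(base, s))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b)
                               for b, n in zip(base, nb)))
    return synthetic
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class occupies the same feature-space region rather than duplicating exact records.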
Affiliation(s)
- Ishleen Kaur
- Sri Guru Tegh Bahadur Khalsa College, University of Delhi, Delhi, India
- Tanvir Ahmad
- Department of Computer Engineering, Jamia Millia Islamia, New Delhi, India
11
Zhang Y, Zhu H, Cheng J, Wang J, Gu X, Han J, Zhang Y, Zhao Y, He Y, Zhang H. Improving the Quality of Fetal Heart Ultrasound Imaging With Multihead Enhanced Self-Attention and Contrastive Learning. IEEE J Biomed Health Inform 2023; 27:5518-5529. [PMID: 37556337] [DOI: 10.1109/jbhi.2023.3303573]
Abstract
Fetal congenital heart disease (FCHD) is a common, serious birth defect affecting ∼1% of newborns annually. Fetal echocardiography is the most effective and important technique for prenatal FCHD diagnosis. The prerequisites for accurate ultrasound FCHD diagnosis are accurate view recognition and high-quality diagnostic view extraction. However, these manual clinical procedures have drawbacks such as varying technical capability and inefficiency. The automatic identification of high-quality multiview fetal heart scan images is therefore highly desirable to improve the efficiency and accuracy of prenatal FCHD diagnosis. Here, we present a framework for multiview fetal heart ultrasound image recognition and quality assessment that comprises two parts: a multiview classification and localization network (MCLN) and an improved contrastive learning network (ICLN). In the MCLN, a multihead enhanced self-attention mechanism is used to construct the classification network and identify six accurate and interpretable views of the fetal heart. The ICLN considers both anatomical structure standardization and image clarity; with contrastive learning, the absolute loss, feature relative loss and predicted-value relative loss are combined to achieve favorable quality assessment results. Experiments show that the MCLN outperforms other state-of-the-art networks by 1.52-13.61% in F1 score on six standard view recognition tasks, and the ICLN is comparable to expert cardiologists in the quality assessment of fetal heart ultrasound images, agreeing within 2 points on 97% of a test set for the four-chamber view task. Thus, our architecture offers great potential for helping cardiologists improve quality control of fetal echocardiographic images in clinical practice.
12
Padovani P, Singh Y, Pass RH, Vasile CM, Nield LE, Baruteau AE. E-Health: A Game Changer in Fetal and Neonatal Cardiology? J Clin Med 2023; 12:6865. [PMID: 37959330 PMCID: PMC10650296 DOI: 10.3390/jcm12216865] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2023] [Revised: 10/20/2023] [Accepted: 10/26/2023] [Indexed: 11/15/2023] Open
Abstract
Technological advancements have greatly impacted the healthcare industry, including the integration of e-health into pediatric cardiology. Telemedicine, mobile health applications, and electronic health records have demonstrated significant potential to improve patient outcomes, reduce healthcare costs, and enhance the quality of care. Telemedicine provides a useful tool for remote clinics, follow-up visits, and monitoring of infants with congenital heart disease, while mobile health applications enhance patient and parent education, medication compliance, and, in some instances, remote monitoring of vital signs. Despite the benefits of e-health, there are potential limitations and challenges, such as issues related to availability, cost-effectiveness, data privacy and security, and the potential ethical, legal, and social implications of e-health interventions. In this review, we highlight the current applications and perspectives of e-health in the field of fetal and neonatal cardiology, including expert parents' opinions.
Affiliation(s)
- Paul Padovani
- CHU Nantes, Department of Pediatric Cardiology and Pediatric Cardiac Surgery, FHU PRECICARE, Nantes Université, 44000 Nantes, France
- CHU Nantes, INSERM, CIC FEA 1413, Nantes Université, 44000 Nantes, France
- Yogen Singh
- Division of Neonatology, Department of Pediatrics, Loma Linda University School of Medicine, Loma Linda, CA 92354, USA
- Division of Neonatal and Developmental Medicine, Department of Pediatrics, Stanford University School of Medicine, Stanford, CA 94305, USA
- Robert H. Pass
- Department of Pediatric Cardiology, Mount Sinai Kravis Children’s Hospital, New York, NY 10029, USA
- Corina Maria Vasile
- Department of Pediatric and Adult Congenital Cardiology, University Hospital of Bordeaux, 33600 Bordeaux, France
- Lynne E. Nield
- Division of Cardiology, Labatt Family Heart Centre, The Hospital for Sick Children, University of Toronto, Toronto, ON M5S 1A1, Canada
- Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Alban-Elouen Baruteau
- CHU Nantes, Department of Pediatric Cardiology and Pediatric Cardiac Surgery, FHU PRECICARE, Nantes Université, 44000 Nantes, France
- CHU Nantes, INSERM, CIC FEA 1413, Nantes Université, 44000 Nantes, France
- CHU Nantes, CNRS, INSERM, L’Institut du Thorax, Nantes Université, 44000 Nantes, France
- INRAE, UMR 1280, PhAN, Nantes Université, 44000 Nantes, France
13
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298 PMCID: PMC10649694 DOI: 10.3390/jcm12216833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Revised: 10/17/2023] [Accepted: 10/25/2023] [Indexed: 11/15/2023] Open
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is considered cost-effective and easily accessible but is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. A systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme, and full-text articles were assigned to OB/GYN subspecialties and their research topics. This review includes 189 articles published from 1994 to 2023; 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. In conclusion, applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. While most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still-experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
14
Jo Y, Lee D, Baek D, Choi BK, Aryal N, Jung J, Shin YS, Hong B. Optimal view detection for ultrasound-guided supraclavicular block using deep learning approaches. Sci Rep 2023; 13:17209. [PMID: 37821574 PMCID: PMC10567700 DOI: 10.1038/s41598-023-44170-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 10/04/2023] [Indexed: 10/13/2023] Open
Abstract
Successful ultrasound-guided supraclavicular block (SCB) requires an understanding of sonoanatomy and identification of the optimal view. Segmentation using a convolutional neural network (CNN) is limited in clearly determining the optimal view. The present study describes the development of a computer-aided diagnosis (CADx) system using a CNN that can determine, in real time, the optimal view for complete SCB, with the aim of aiding non-experts. Ultrasound videos were retrospectively collected from 881 patients to develop the CADx system (600 in the training and validation set and 281 in the test set). The CADx system included classification and segmentation approaches, with a residual neural network (ResNet) and U-Net, respectively, applied as backbone networks. In the classification approach, an ablation study was performed to determine the optimal architecture and improve the performance of the model. In the segmentation approach, a cascade structure, in which U-Net is connected to ResNet, was implemented. The performance of the two approaches was evaluated based on a confusion matrix. In the classification approach, ResNet34 with gated recurrent units and augmentation showed the highest performance, with an average accuracy of 0.901, precision of 0.613, recall of 0.757, F1 score of 0.677, and AUROC of 0.936. In the segmentation approach, U-Net combined with ResNet34 and augmentation performed worse than the classification approach. The CADx system described in this study showed high performance in determining the optimal view for SCB; it could be expanded to many anatomical regions and may have the potential to aid clinicians in real-time settings. Trial registration: the protocol was registered with the Clinical Trial Registry of Korea (KCT0005822, https://cris.nih.go.kr ).
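The confusion-matrix evaluation described above reduces to a handful of standard formulas in the binary case; a minimal sketch (the function name and counts are illustrative, not the study's code):

```python
def metrics_from_confusion(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics_from_confusion(tp=6, fp=2, fn=2, tn=10)
print(acc, prec, rec, f1)  # → 0.8 0.75 0.75 0.75
```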
Affiliation(s)
- Yumin Jo
- Department of Anaesthesiology and Pain Medicine, College of Medicine, Chungnam National University and Hospital, 282 Munhwar-ro, Jung-gu, Daejeon, 35015, Republic of Korea
- Dongheon Lee
- Department of Biomedical Engineering, College of Medicine, Chungnam National University and Hospital, Daejeon, Republic of Korea
- Biomedical Research Institute, Chungnam National University Hospital, Daejeon, Republic of Korea
- Donghyeon Baek
- Chungnam National University College of Medicine, Daejeon, Republic of Korea
- Jinsik Jung
- Department of Anaesthesiology and Pain Medicine, College of Medicine, Chungnam National University and Hospital, 282 Munhwar-ro, Jung-gu, Daejeon, 35015, Republic of Korea
- Yong Sup Shin
- Department of Anaesthesiology and Pain Medicine, College of Medicine, Chungnam National University and Hospital, 282 Munhwar-ro, Jung-gu, Daejeon, 35015, Republic of Korea
- Boohwi Hong
- Department of Anaesthesiology and Pain Medicine, College of Medicine, Chungnam National University and Hospital, 282 Munhwar-ro, Jung-gu, Daejeon, 35015, Republic of Korea
- Biomedical Research Institute, Chungnam National University Hospital, Daejeon, Republic of Korea
15
Wang L, Wang J, Zhu L, Fu H, Li P, Cheng G, Feng Z, Li S, Heng PA. Dual Multiscale Mean Teacher Network for Semi-Supervised Infection Segmentation in Chest CT Volume for COVID-19. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:6363-6375. [PMID: 37015538 DOI: 10.1109/tcyb.2022.3223528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating coronavirus disease 2019 (COVID-19). However, several challenges remain in developing such AI systems: 1) most current COVID-19 infection segmentation methods rely on 2-D CT images, which lack 3-D sequential constraints; 2) existing 3-D CT segmentation methods focus on single-scale representations and do not capture multiple receptive field sizes on 3-D volumes; and 3) the sudden outbreak of COVID-19 made it hard to annotate sufficient CT volumes for training deep models. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multiscale information along different dimensions of the input feature maps and impose supervision on multiple predictions from different convolutional neural network (CNN) layers. Second, we use this MDA-CNN as the basic network of a novel dual multiscale mean teacher network (DM[Formula: see text]-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploring multiscale information. Our DM[Formula: see text]-Net encourages multiple predictions at different CNN layers from the student and teacher networks to be consistent, computing a multiscale consistency loss on unlabeled data that is added to the supervised loss on labeled data from the multiple predictions of the MDA-CNN. Third, we collected two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
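The mean-teacher idea referenced above keeps a teacher network whose weights are an exponential moving average (EMA) of the student's, and penalizes student-teacher disagreement on unlabeled data. A toy sketch under those assumptions; scalar "weights" and a mean-squared disagreement averaged over scales stand in for real network parameters and the paper's exact multiscale loss:

```python
def ema_update(teacher, student, alpha=0.99):
    """Move each teacher weight toward the student by an EMA step."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

def multiscale_consistency(student_preds, teacher_preds):
    """Mean squared student-teacher disagreement, averaged over scales."""
    per_scale = []
    for sp, tp in zip(student_preds, teacher_preds):
        per_scale.append(sum((s - t) ** 2 for s, t in zip(sp, tp)) / len(sp))
    return sum(per_scale) / len(per_scale)

teacher = ema_update([0.0, 1.0], [1.0, 1.0], alpha=0.9)
print(teacher)  # → [0.1, 1.0] (approximately, up to float rounding)
print(multiscale_consistency([[0.2, 0.8]], [[0.2, 0.8]]))  # → 0.0
```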
16
Guo J, Tan G, Wu F, Wen H, Li K. Fetal Ultrasound Standard Plane Detection With Coarse-to-Fine Multi-Task Learning. IEEE J Biomed Health Inform 2023; 27:5023-5031. [PMID: 36173776 DOI: 10.1109/jbhi.2022.3209589] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The ultrasound standard plane plays an important role in prenatal fetal growth parameter measurement and disease diagnosis in prenatal screening. However, obtaining standard planes from a fetal ultrasound video is not only laborious and time-consuming but also depends, to a certain extent, on the clinical experience of the sonographer. To improve the acquisition efficiency and accuracy of ultrasound standard planes, we propose a novel detection framework that combines a coarse-to-fine detection strategy with a multi-task learning mechanism for feature-fused images. First, traditional manually designed features and deep-learning-based features are fused to obtain low-level shared features, which enhance the model's feature expression ability. Inspired by the process of human recognition, ultrasound standard plane detection is divided into a coarse stage of plane-type classification and a fine stage of standard-or-not detection, implemented via an end-to-end multi-task learning network. The region of interest is also recognized in our detection framework to suppress the influence of a variable maternal background. Extensive experiments are conducted on three ultrasound planes of the first-class fetal examination, i.e., femur, thalamus, and abdomen ultrasound images. The experimental results show that our method outperforms competing methods in terms of accuracy, which demonstrates the efficacy of the proposed method and its potential to reduce the workload of sonographers in prenatal screening.
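The coarse-to-fine multi-task objective described above can be read as a weighted sum of a plane-type loss and a standard-or-not loss. A minimal sketch with cross-entropy on already-normalized probabilities; the weights, names, and toy numbers are assumptions for illustration, not the paper's exact formulation:

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target class (probs already sum to 1)."""
    return -math.log(max(probs[target_idx], 1e-12))

def multitask_loss(coarse_probs, coarse_target, fine_probs, fine_target,
                   w_coarse=1.0, w_fine=1.0):
    """Coarse plane-type loss plus fine standard-or-not loss, weighted."""
    return (w_coarse * cross_entropy(coarse_probs, coarse_target)
            + w_fine * cross_entropy(fine_probs, fine_target))

# A confident, correct prediction on both tasks yields a near-zero loss.
loss = multitask_loss([0.9, 0.05, 0.05], 0, [0.95, 0.05], 0)
print(round(loss, 4))  # → 0.1567
```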
17
Kim Y, Hyon Y, Woo SD, Lee S, Lee SI, Ha T, Chung C. Evolution of the Stethoscope: Advances with the Adoption of Machine Learning and Development of Wearable Devices. Tuberc Respir Dis (Seoul) 2023; 86:251-263. [PMID: 37592751 PMCID: PMC10555525 DOI: 10.4046/trd.2023.0065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 08/04/2023] [Accepted: 08/15/2023] [Indexed: 08/19/2023] Open
Abstract
The stethoscope has long been used for the examination of patients, but the importance of auscultation has declined owing to its several limitations and the development of other diagnostic tools. However, auscultation is still recognized as a primary diagnostic tool because it is non-invasive and provides valuable information in real time. To overcome the limitations of existing stethoscopes, digital stethoscopes with machine learning (ML) algorithms have been developed. We can now record and share respiratory sounds, and AI-assisted auscultation using ML algorithms can distinguish between types of sound. Recently, demand for remote care and for non-face-to-face treatment of diseases requiring isolation, such as coronavirus disease 2019 (COVID-19), has increased. To address these needs, wireless and wearable stethoscopes are being developed, aided by advances in battery technology and integrated sensors. This review provides the history of the stethoscope and the classification of respiratory sounds, describes ML algorithms, and introduces new auscultation methods based on AI-assisted analysis and wireless or wearable stethoscopes.
Affiliation(s)
- Yoonjoo Kim
- Division of Pulmonology, Department of Internal Medicine, Chungnam National University College of Medicine, Daejeon, Republic of Korea
- YunKyong Hyon
- Division of Industrial Mathematics, National Institute for Mathematical Sciences, Daejeon, Republic of Korea
- Seong-Dae Woo
- Division of Pulmonology, Department of Internal Medicine, Chungnam National University College of Medicine, Daejeon, Republic of Korea
- Sunju Lee
- Division of Industrial Mathematics, National Institute for Mathematical Sciences, Daejeon, Republic of Korea
- Song-I Lee
- Division of Pulmonology, Department of Internal Medicine, Chungnam National University College of Medicine, Daejeon, Republic of Korea
- Taeyoung Ha
- Division of Industrial Mathematics, National Institute for Mathematical Sciences, Daejeon, Republic of Korea
- Chaeuk Chung
- Division of Pulmonology, Department of Internal Medicine, Chungnam National University College of Medicine, Daejeon, Republic of Korea
18
Pei Y, Wang G, Cao H, Jiang S, Wang D, Wang H, Wang H, Yu H. A deep-learning pipeline to diagnose pediatric intussusception and assess severity during ultrasound scanning: a multicenter retrospective-prospective study. NPJ Digit Med 2023; 6:182. [PMID: 37775624 PMCID: PMC10541898 DOI: 10.1038/s41746-023-00930-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Accepted: 09/14/2023] [Indexed: 10/01/2023] Open
Abstract
Ileocolic intussusception is one of the most common acute abdominal emergencies in children and is first diagnosed urgently using ultrasound. Manual diagnosis requires extensive experience and skill, and identifying surgical indications when assessing disease severity is even more challenging. We aimed to develop a real-time lesion-visualization deep-learning pipeline to solve this problem. This multicenter retrospective-prospective study used 14,085 images from 8736 consecutive patients (median age, eight months) with ileocolic intussusception who underwent ultrasound at six hospitals to train, validate, and test the deep-learning pipeline. The algorithm was subsequently validated on an internal image test set and an external video dataset. Furthermore, the performances of junior, intermediate, and senior sonographers, and of junior sonographers with AI assistance, were prospectively compared in 242 volunteers using the DeLong test. The tool recognized 1086 images with three ileocolic intussusception signs, with an average area under the receiver operating characteristic curve (average-AUC) of 0.972. It diagnosed 184 patients with no intussusception, nonsurgical intussusception, or surgical intussusception in 184 ultrasound videos with an average-AUC of 0.956. In the prospective pilot study of 242 volunteers, junior sonographers' performance improved significantly with AI assistance (average-AUC: 0.966 vs. 0.857, P < 0.001; median scanning time: 9.46 min vs. 3.66 min, P < 0.001) and became comparable to that of senior sonographers (average-AUC: 0.966 vs. 0.973, P = 0.600). Thus, we report that this deep-learning pipeline, which guides lesions in real time and is interpretable during ultrasound scanning, could assist sonographers in improving the accuracy and efficiency of diagnosing intussusception and identifying surgical indications.
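The average-AUC figures reported above are, for a single binary task, equivalent to the probability that a randomly chosen positive case scores above a randomly chosen negative one (the Mann-Whitney formulation). A small sketch of that pairwise computation; the function name and scores are illustrative:

```python
def auc(pos_scores, neg_scores):
    """AUC as the fraction of positive/negative pairs ranked correctly,
    counting ties as half-correct (Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8], [0.2, 0.1]))  # → 1.0  (perfect separation)
print(auc([0.5, 0.5], [0.5, 0.5]))  # → 0.5  (chance level)
```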
Affiliation(s)
- Yuanyuan Pei
- Provincial Key Laboratory of Research in Structure Birth Defect Disease and Department of Pediatric Surgery, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangdong Provincial Clinical Research Center for Child Health, Guangzhou, China
- Guijuan Wang
- School of Computer Science, South China Normal University, Guangzhou, China
- Haiwei Cao
- Ultrasonic Department, Kaifeng Children's Hospital, Kaifeng, China
- Shuanglan Jiang
- Ultrasonic Department, Dongguan Children's Hospital, Dongguan, China
- Dan Wang
- Ultrasonic Department, Children's Hospital Affiliated to Zhengzhou University, Zhengzhou, China
- Haiyu Wang
- Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Hongying Wang
- Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Hongkui Yu
- Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Department of Ultrasonography, Shenzhen Baoan Women's and Children's Hospital, Jinan University, Shenzhen, China
19
Ramirez Zegarra R, Ghi T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. ULTRASOUND IN OBSTETRICS & GYNECOLOGY : THE OFFICIAL JOURNAL OF THE INTERNATIONAL SOCIETY OF ULTRASOUND IN OBSTETRICS AND GYNECOLOGY 2023; 62:185-194. [PMID: 36436205 DOI: 10.1002/uog.26130] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 11/06/2022] [Accepted: 11/21/2022] [Indexed: 06/16/2023]
Abstract
Deep learning is considered the leading artificial intelligence tool in image analysis in general. Deep-learning algorithms excel at image recognition, which makes them valuable in medical imaging. Obstetric ultrasound has become the gold standard imaging modality for detection and diagnosis of fetal malformations. However, ultrasound relies heavily on the operator's experience, making it unreliable in inexperienced hands. Several studies have proposed the use of deep-learning models as a tool to support sonographers, in an attempt to overcome these problems inherent to ultrasound. Deep learning has many clinical applications in the field of fetal imaging, including identification of normal and abnormal fetal anatomy and measurement of fetal biometry. In this Review, we provide a comprehensive explanation of the fundamentals of deep learning in fetal imaging, with particular focus on its clinical applicability. © 2022 International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- R Ramirez Zegarra
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- T Ghi
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
20
Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802 DOI: 10.1002/pd.6411] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Revised: 06/05/2023] [Accepted: 07/17/2023] [Indexed: 07/29/2023]
Abstract
The objective of this review is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the following keywords: "neural networks", OR "artificial intelligence", OR "machine learning", OR "deep learning", AND "obstetrics", OR "obstetrical", OR "fetus", OR "foetus", OR "fetal", OR "foetal", OR "pregnancy", OR "pregnant", AND "ultrasound", from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound, defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. The search identified a total of 127 papers that fulfilled the inclusion criteria. Current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses such as assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Lea Nehme
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Alfred Abuhamad
- Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
21
Zhen C, Wang H, Cheng J, Yang X, Chen C, Hu X, Zhang Y, Cao Y, Ni D, Huang W, Wang P. Locating Multiple Standard Planes in First-Trimester Ultrasound Videos via the Detection and Scoring of Key Anatomical Structures. ULTRASOUND IN MEDICINE & BIOLOGY 2023:S0301-5629(23)00163-1. [PMID: 37291008 DOI: 10.1016/j.ultrasmedbio.2023.05.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 04/20/2023] [Accepted: 05/10/2023] [Indexed: 06/10/2023]
Abstract
OBJECTIVE This study was aimed at developing a first-trimester standard plane detection (FTSPD) system that can automatically locate nine standard planes in ultrasound videos and investigating its utility in clinical practice. METHODS The FTSPD system, based on the YOLOv3 network, was developed to detect structures and evaluate the quality of plane images by using a pre-defined scoring system. A total of 220 videos from two different ultrasound scanners were collected to compare detection performance between our FTSPD system and sonographers with different levels of experience. The quality of the detected standard planes was quantitatively rated by an expert according to a scoring protocol. Kolmogorov-Smirnov analysis was used to compare the distributions of scores across all nine standard planes. RESULTS The expert-rated scores indicated that the quality of the standard planes detected by the FTSPD system was on par with that of the planes detected by senior sonographers. There were no significant differences in the distributions of the scores across all nine standard planes. The FTSPD system performed significantly better than junior sonographers in five standard plane types. CONCLUSION The results of this study suggest that our FTSPD system has significant potential for detecting standard planes in first-trimester ultrasound screening, which may help to improve the accuracy of fetal ultrasound screening and facilitate early diagnosis of abnormalities. The quality of the standard planes selected by junior sonographers can be significantly improved with the assistance of our FTSPD system.
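The pre-defined scoring idea above — rating a candidate plane by which key anatomical structures are confidently detected — can be sketched as a weighted checklist over a detector's outputs. The structure names, point weights, and threshold below are invented for illustration and do not reproduce the FTSPD system's actual protocol:

```python
def score_plane(detections, protocol, threshold=0.5):
    """Sum protocol points for each structure detected above the threshold."""
    return sum(points for structure, points in protocol.items()
               if detections.get(structure, 0.0) >= threshold)

# Hypothetical protocol for one plane type: structure -> points awarded.
protocol = {"nasal_bone": 2, "palate": 2, "nuchal_translucency": 3, "diencephalon": 1}

# Detector confidences for two candidate frames; the best-scoring frame wins.
frames = [
    {"nasal_bone": 0.9, "palate": 0.4, "nuchal_translucency": 0.8},
    {"nasal_bone": 0.8, "palate": 0.7, "nuchal_translucency": 0.9, "diencephalon": 0.6},
]
scores = [score_plane(f, protocol) for f in frames]
best = max(range(len(frames)), key=lambda i: scores[i])
print(scores, best)  # → [5, 8] 1
```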
Affiliation(s)
- Chaojiong Zhen
- Department of Ultrasonography, Academy of Orthopedics, Third Affiliated Hospital of Southern Medical University, Guangdong Province, Guangzhou, China; Department of Medical Ultrasonics, First People's Hospital of Foshan, Foshan, China
- Hongzhang Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Jun Cheng
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Chaoyu Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xindi Hu
- Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Yuanji Zhang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Yan Cao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Weijun Huang
- Department of Medical Ultrasonics, First People's Hospital of Foshan, Foshan, China
- Ping Wang
- Department of Ultrasonography, Academy of Orthopedics, Third Affiliated Hospital of Southern Medical University, Guangdong Province, Guangzhou, China
22
Xiao S, Zhang J, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Application and Progress of Artificial Intelligence in Fetal Ultrasound. J Clin Med 2023; 12:jcm12093298. [PMID: 37176738 PMCID: PMC10179567 DOI: 10.3390/jcm12093298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Revised: 04/01/2023] [Accepted: 04/28/2023] [Indexed: 05/15/2023] Open
Abstract
Prenatal ultrasonography is the most crucial imaging modality during pregnancy. However, problems such as high fetal mobility, excessive maternal abdominal wall thickness, and inter-observer variability limit the development of traditional ultrasound in clinical applications. The combination of artificial intelligence (AI) and obstetric ultrasound may help optimize fetal ultrasound examination by shortening the examination time, reducing the physician's workload, and improving diagnostic accuracy. AI has been successfully applied to automatic fetal ultrasound standard plane detection, biometric parameter measurement, and disease diagnosis to facilitate conventional imaging approaches. In this review, we attempt to thoroughly review the applications and advantages of AI in prenatal fetal ultrasound and discuss the challenges and promises of this new field.
Affiliation(s)
- Sushan Xiao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Junmin Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Ye Zhu
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Zisang Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Haiyan Cao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Mingxing Xie
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Li Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
23
Yasrab R, Fu Z, Zhao H, Lee LH, Sharma H, Drukker L, Papageorghiou AT, Noble JA. A Machine Learning Method for Automated Description and Workflow Analysis of First Trimester Ultrasound Scans. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1301-1313. [PMID: 36455084 DOI: 10.1109/tmi.2022.3226274] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Obstetric ultrasound assessment of fetal anatomy in the first trimester of pregnancy is one of the less explored fields in obstetric sonography because of the paucity of guidelines on anatomical screening and availability of data. This paper, for the first time, examines imaging proficiency and practices of first trimester ultrasound scanning through analysis of full-length ultrasound video scans. Findings from this study provide insights to inform the development of more effective user-machine interfaces and targeted assistive technologies, as well as improvements in workflow protocols for first trimester scanning. Specifically, this paper presents an automated framework to model operator clinical workflow from full-length routine first-trimester fetal ultrasound scan videos. The 2D+t convolutional neural network-based architecture proposed for video annotation incorporates transfer learning and spatio-temporal (2D+t) modelling to automatically partition an ultrasound video into semantically meaningful temporal segments based on the fetal anatomy detected in the video. The model results in a cross-validation A1 accuracy of 96.10%, F1 = 0.95, precision = 0.94 and recall = 0.95. Automated semantic partitioning of unlabelled video scans (n = 250) achieves a high correlation with expert annotations (ρ = 0.95, p = 0.06). Clinical workflow patterns, operator skill and its variability can be derived from the resulting representation using the detected anatomy labels, order, and distribution. It is shown that nuchal translucency (NT) is the toughest standard plane to acquire and most operators struggle to localize high-quality frames. Furthermore, it is found that newly qualified operators spend 25.56% more time on key biometry tasks than experienced operators.
24
Xue H, Yu W, Liu Z, Liu P. Early Pregnancy Fetal Facial Ultrasound Standard Plane-Assisted Recognition Algorithm. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2023. [PMID: 36896480 DOI: 10.1002/jum.16209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 02/14/2023] [Accepted: 02/14/2023] [Indexed: 06/18/2023]
Abstract
OBJECTIVES Ultrasound screening during early pregnancy is vital in preventing congenital disabilities. For example, nuchal translucency (NT) thickening is associated with fetal chromosomal abnormalities, particularly trisomy 21, and with fetal heart malformations. Obtaining accurate ultrasound standard planes of the fetal face during early pregnancy is the key to subsequent biometry and disease diagnosis. Therefore, we propose a lightweight target detection network for early pregnancy fetal facial ultrasound standard plane recognition and quality assessment. METHODS First, a clinical control protocol was developed by ultrasound experts. Second, we constructed a YOLOv4 target detection algorithm with GhostNet as the backbone network and added the attention mechanisms CBAM and CA to the backbone and neck structures. Finally, key anatomical structures in the image were automatically scored according to the clinical control protocol to determine whether the image was a standard plane. RESULTS We reviewed other detection techniques and found that the proposed method performed well. The average recognition accuracy for six structures was 94.16%, the detection speed was 51 FPS, and the model size was 43.2 MB, an 83% reduction compared with the original YOLOv4 model. The precision for the standard median sagittal plane was 97.20%, and the accuracy for the standard retro-nasal triangle view was 99.07%. CONCLUSIONS The proposed method can better identify standard and non-standard planes from ultrasound image data, providing a theoretical basis for the automatic acquisition of standard planes in the prenatal diagnosis of early pregnancy fetuses.
Affiliation(s)
- Hao Xue
- College of Engineering, Huaqiao University, Quanzhou, China
- Weifeng Yu
- Department of Ultrasound, The First Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Zhonghua Liu
- Department of Ultrasound, The First Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Peizhong Liu
- College of Engineering, Huaqiao University, Quanzhou, China
25
Balagalla UB, Jayasooriya J, de Alwis C, Subasinghe A. Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images – a survey. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2023. [DOI: 10.1080/21681163.2023.2179343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Affiliation(s)
- U. B. Balagalla
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- J.V.D. Jayasooriya
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- C. de Alwis
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
- A. Subasinghe
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
26
Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861 DOI: 10.1016/j.media.2022.102629] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 07/12/2022] [Accepted: 09/10/2022] [Indexed: 11/07/2022]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing ultrasound (US) fetal images. A number of survey papers are now available in the field, but most of them focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and discussed from both the methodology and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical structure analysis and (iii) biometry parameter estimation. For each category, main limitations and open issues are presented. Summary tables are included to facilitate the comparison among the different approaches. In addition, emerging applications are also outlined. Publicly available datasets and performance metrics commonly used to assess algorithm performance are also summarized. This paper ends with a critical summary of the current state of the art on DL algorithms for fetal US image analysis and a discussion of the current challenges that researchers working in the field have to tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
27
Birlo M, Edwards PJE, Yoo S, Dromey B, Vasconcelos F, Clarkson MJ, Stoyanov D. CAL-Tutor: A HoloLens 2 Application for Training in Obstetric Sonography and User Motion Data Recording. J Imaging 2022; 9:6. [PMID: 36662104 PMCID: PMC9860994 DOI: 10.3390/jimaging9010006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 11/30/2022] [Accepted: 12/20/2022] [Indexed: 12/30/2022] Open
Abstract
Obstetric ultrasound (US) training teaches the relationship between foetal anatomy and the viewed US slice to enable navigation to standardised anatomical planes (head, abdomen and femur) where diagnostic measurements are taken. This process is difficult to learn and results in considerable inter-operator variability. We propose the CAL-Tutor system for US training based on a US scanner and phantom, where models of both the baby and the US slice are displayed to the trainee in their physical locations using the HoloLens 2. The intention is that AR guidance will shorten the learning curve for US trainees and improve spatial awareness. In addition to the AR guidance, we also record many data streams to assess user motion and the learning process. The HoloLens 2 provides eye gaze and head and hand positions, ARToolkit and NDI Aurora tracking give the US probe positions, and an external camera records the overall scene. These data can provide a rich source for further analysis, such as distinguishing expert from novice motion. We have demonstrated the system in a sample of engineers. Feedback suggests that the system helps novice users navigate the US probe to the standard plane. The data capture is successful and initial data visualisations show that meaningful information about user behaviour can be captured. Initial feedback is encouraging and shows improved user assessment where AR guidance is provided.
Affiliation(s)
- Manuel Birlo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- Philip J. Eddie Edwards
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- Soojeong Yoo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- UCL Interaction Centre (UCLIC), University College London, 66-72 Gower Street, London WC1E 6EA, UK
- Brian Dromey
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- UCL EGA Institute for Women’s Health, Medical School Building, 74 Huntley Street, London WC1E 6AU, UK
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- Matthew J. Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
28
Drukker L, Sharma H, Karim JN, Droste R, Noble JA, Papageorghiou AT. Clinical workflow of sonographers performing fetal anomaly ultrasound scans: deep-learning-based analysis. ULTRASOUND IN OBSTETRICS & GYNECOLOGY : THE OFFICIAL JOURNAL OF THE INTERNATIONAL SOCIETY OF ULTRASOUND IN OBSTETRICS AND GYNECOLOGY 2022; 60:759-765. [PMID: 35726505 PMCID: PMC10107110 DOI: 10.1002/uog.24975] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 06/04/2022] [Accepted: 06/10/2022] [Indexed: 05/31/2023]
Abstract
OBJECTIVE Despite decades of obstetric scanning, the field of sonographer workflow remains largely unexplored. In the second trimester, sonographers use scan guidelines to guide their acquisition of standard planes and structures; however, the scan-acquisition order is not prescribed. Using deep-learning-based video analysis, the aim of this study was to develop a deeper understanding of the clinical workflow undertaken by sonographers during second-trimester anomaly scans. METHODS We collected prospectively full-length video recordings of routine second-trimester anomaly scans. Important scan events in the videos were identified by detecting automatically image freeze and image/clip save. The video immediately preceding and following the important event was extracted and labeled as one of 11 commonly acquired anatomical structures. We developed and used a purposely trained and tested deep-learning annotation model to label automatically the large number of scan events. Thus, anomaly scans were partitioned as a sequence of anatomical planes or fetal structures obtained over time. RESULTS A total of 496 anomaly scans performed by 14 sonographers were available for analysis. UK guidelines specify that an image or videoclip of five different anatomical regions must be stored and these were detected in the majority of scans: head/brain was detected in 97.2% of scans, coronal face view (nose/lips) in 86.1%, abdomen in 93.1%, spine in 95.0% and femur in 92.3%. Analyzing the clinical workflow, we observed that sonographers were most likely to begin their scan by capturing the head/brain (in 24.4% of scans), spine (in 23.2%) or thorax/heart (in 22.8%). The most commonly identified two-structure transitions were: placenta/amniotic fluid to maternal anatomy, occurring in 44.5% of scans; head/brain to coronal face (nose/lips) in 42.7%; abdomen to thorax/heart in 26.1%; and three-dimensional/four-dimensional face to sagittal face (profile) in 23.7%. 
Transitions between three or more consecutive structures in sequence were uncommon (up to 13% of scans). None of the captured anomaly scans shared an entirely identical sequence. CONCLUSIONS We present a novel evaluation of the anomaly scan acquisition process using a deep-learning-based analysis of ultrasound video. We note wide variation in the number and sequence of structures obtained during routine second-trimester anomaly scans. Overall, each anomaly scan was found to be unique in its scanning sequence, suggesting that sonographers take advantage of the fetal position and acquire the standard planes according to their visibility rather than following a strict acquisition order. © 2022 The Authors. Ultrasound in Obstetrics & Gynecology published by John Wiley & Sons Ltd on behalf of International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- L. Drukker
- Nuffield Department of Women's and Reproductive Health, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Women's Ultrasound, Department of Obstetrics and Gynecology, Beilinson Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- H. Sharma
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- J. N. Karim
- Nuffield Department of Women's and Reproductive Health, John Radcliffe Hospital, University of Oxford, Oxford, UK
- R. Droste
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- J. A. Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- A. T. Papageorghiou
- Nuffield Department of Women's and Reproductive Health, John Radcliffe Hospital, University of Oxford, Oxford, UK
29
Yang Y, Shang F, Wu B, Yang D, Wang L, Xu Y, Zhang W, Zhang T. Robust Collaborative Learning of Patch-Level and Image-Level Annotations for Diabetic Retinopathy Grading From Fundus Image. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:11407-11417. [PMID: 33961571 DOI: 10.1109/tcyb.2021.3062638] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Diabetic retinopathy (DR) grading from fundus images has attracted increasing interest in both academic and industrial communities. Most convolutional neural network-based algorithms treat DR grading as a classification task via image-level annotations. However, these algorithms have not fully explored the valuable information in the DR-related lesions. In this article, we present a robust framework, which collaboratively utilizes patch-level and image-level annotations, for DR severity grading. By end-to-end optimization, this framework can bidirectionally exchange the fine-grained lesion and image-level grade information. As a result, it exploits more discriminative features for DR grading. The proposed framework shows better performance than the recent state-of-the-art algorithms and three clinical ophthalmologists with over nine years of experience. By testing on datasets of different distributions (such as label and camera), we prove that our algorithm is robust when facing image quality and distribution variations that commonly exist in real-world practice. We inspect the proposed framework through extensive ablation studies to demonstrate the effectiveness and necessity of each component. The code and some valuable annotations are now publicly available.
30
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024 PMCID: PMC9287600 DOI: 10.1016/j.isci.2022.104713] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/09/2022] [Accepted: 06/28/2022] [Indexed: 11/26/2022] Open
Abstract
Several reviews have been conducted on artificial intelligence (AI) techniques to improve pregnancy outcomes, but they do not focus on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We report our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Of 1269 studies, 107 were included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16) and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetal head (43), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications. Highlights: artificial intelligence studies to monitor fetal development via ultrasound images; fetal issues categorized into four groups (general, head, heart, face, abdomen); the most used AI techniques are classification, segmentation, object detection, and RL; the research and practical implications are included.
31
Huang R, Ying Q, Lin Z, Zheng Z, Tan L, Tang G, Zhang Q, Luo M, Yi X, Liu P, Pan W, Wu J, Luo B, Ni D. Extracting keyframes of Breast Ultrasound Video using Deep Reinforcement Learning. Med Image Anal 2022; 80:102490. [DOI: 10.1016/j.media.2022.102490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Revised: 04/08/2022] [Accepted: 05/20/2022] [Indexed: 10/18/2022]
32
Wang F, Liang X, Xu L, Lin L. Unifying Relational Sentence Generation and Retrieval for Medical Image Report Composition. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:5015-5025. [PMID: 33119525 DOI: 10.1109/tcyb.2020.3026098] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Beyond generating long and topic-coherent paragraphs in traditional captioning tasks, the medical image report composition task poses more task-oriented challenges by requiring both highly accurate medical term diagnosis and multiple heterogeneous forms of information, including impression and findings. Current methods often generate the most common sentences due to dataset bias for the individual case, regardless of whether the sentences properly capture key entities and relationships. Such limitations severely hinder their applicability and generalization capability in medical report composition, where the most critical sentences lie in the descriptions of abnormal diseases that are relatively rare. Moreover, some medical terms appearing in one report are often entangled with each other and co-occur; for example, symptoms associated with a specific disease. To enforce the semantic consistency of the medical terms incorporated into the final reports and to encourage sentence generation for rare abnormal descriptions, we propose a novel framework that unifies template retrieval and sentence generation to handle both common and rare abnormalities while ensuring semantic coherency among the detected medical terms. Specifically, our approach exploits hybrid-knowledge co-reasoning: 1) explicit relationships among all abnormal medical terms to induce visual attention learning and topic representation encoding for better topic-oriented symptom descriptions; and 2) an adaptive generation mode that switches between template retrieval and sentence generation according to a contextual topic encoder. The experimental results on two medical report benchmarks demonstrate the superiority of the proposed framework in terms of both human and metric-based evaluation.
33
ECAU-Net: Efficient channel attention U-Net for fetal ultrasound cerebellum segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103528] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
34
Can AI Automatically Assess Scan Quality of Hip Ultrasound? APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12084072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Ultrasound images can reliably detect Developmental Dysplasia of the Hip (DDH) during early infancy. Accuracy of diagnosis depends on the scan quality, which is subjectively assessed by the sonographer during ultrasound examination. Such assessment is prone to errors and often results in poor-quality scans not being reported, risking misdiagnosis. In this paper, we propose an Artificial Intelligence (AI) technique for automatically determining scan quality. We trained a Convolutional Neural Network (CNN) to categorize 3D Ultrasound (3DUS) hip scans as ‘adequate’ or ‘inadequate’ for diagnosis. We evaluated the performance of this AI technique on two datasets—Dataset 1 (DS1) consisting of 2187 3DUS images in which each image was assessed by one reader for scan quality on a scale of 1 (lowest quality) to 5 (optimal quality) and Dataset 2 (DS2) consisting of 107 3DUS images evaluated semi-quantitatively by four readers using a 10-point scoring system. As a binary classifier (adequate/inadequate), the AI technique gave highly accurate predictions on both datasets (DS1 accuracy = 96% and DS2 accuracy = 91%) and showed high agreement with expert readings in terms of Intraclass Correlation Coefficient (ICC) and Cohen’s kappa coefficient (K). Using our AI-based approach as a screening tool during ultrasound scanning or postprocessing would ensure high scan quality and lead to more reliable ultrasound hip examination in infants.
35
Research on digital media animation control technology based on recurrent neural network using speech technology. INTERNATIONAL JOURNAL OF SYSTEM ASSURANCE ENGINEERING AND MANAGEMENT 2022. [DOI: 10.1007/s13198-021-01540-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
36
Lin M, He X, Guo H, He M, Zhang L, Xian J, Lei T, Xu Q, Zheng J, Feng J, Hao C, Yang Y, Wang N, Xie H. Use of real-time artificial intelligence in detection of abnormal image patterns in standard sonographic reference planes in screening for fetal intracranial malformations. ULTRASOUND IN OBSTETRICS & GYNECOLOGY : THE OFFICIAL JOURNAL OF THE INTERNATIONAL SOCIETY OF ULTRASOUND IN OBSTETRICS AND GYNECOLOGY 2022; 59:304-316. [PMID: 34940999 DOI: 10.1002/uog.24843] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2021] [Revised: 11/02/2021] [Accepted: 11/25/2021] [Indexed: 06/14/2023]
Abstract
OBJECTIVES To develop and validate an artificial intelligence system, the Prenatal ultrasound diagnosis Artificial Intelligence Conduct System (PAICS), to detect different patterns of fetal intracranial abnormality in standard sonographic reference planes for screening for congenital central nervous system (CNS) malformations. METHODS Neurosonographic images from normal fetuses and fetuses with CNS malformations at 18-40 gestational weeks were retrieved from the databases of two tertiary hospitals in China and assigned randomly (ratio, 8:1:1) to training, fine-tuning and internal validation datasets to develop and evaluate the PAICS. The system was built based on a real-time convolutional neural network (CNN) algorithm, You Only Look Once, version 3 (YOLOv3). An image dataset from a third tertiary hospital was used to further validate, externally, the performance of the PAICS and to compare its performance with that of sonologists with different levels of expertise. Furthermore, a prospective video dataset was employed to evaluate the performance of the PAICS in a real-time scan scenario. The diagnostic accuracy, sensitivity, specificity and area under the receiver-operating-characteristics curve (AUC) were calculated to assess the performance of the PAICS and to compare this with the performance of sonologists with different levels of experience. RESULTS In total, 43 890 images from 16 297 pregnancies and 169 videos from 166 pregnancies were used to develop and validate the PAICS. The system achieved excellent performance in identifying 10 types of intracranial image pattern, with macro- and microaverage AUCs, respectively, of 0.933 (95% CI, 0.798-1.000) and 0.977 (95% CI, 0.970-0.985) for the internal validation image dataset, 0.902 (95% CI, 0.816-0.989) and 0.898 (95% CI, 0.885-0.911) for the external validation image dataset and 0.969 (95% CI, 0.886-1.000) and 0.981 (95% CI, 0.974-0.988) in the real-time scan setting. 
The performance of the PAICS was comparable to that of expert sonologists in terms of macro- and microaverage accuracy (P = 0.863 and P = 0.775, respectively), sensitivity (P = 0.883, P = 0.846) and AUC (P = 0.891, P = 0.788), but required significantly less time (0.025 s per image for PAICS vs 4.4 s for experts, P < 0.001). CONCLUSIONS Both in the image dataset and in the real-time scan setting, the PAICS achieved excellent diagnostic performance for various fetal CNS abnormalities. Its performance was comparable to that of experts, but it required less time. A CNN algorithm can be trained to detect fetal CNS abnormalities. The PAICS has the potential to be an effective and efficient tool in screening for fetal CNS malformations in clinical practice. © 2021 International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- M Lin
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- X He
- Department of Ultrasound, Women and Children's Hospital affiliated to Xiamen University, Fujian, China
- H Guo
- Department of Ultrasound, Dongguan Maternal and Child Health Hospital, Dongguan, Guangdong, China
- M He
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- L Zhang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- J Xian
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China & School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- T Lei
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Q Xu
- Department of Ultrasound, Dongguan Maternal and Child Health Hospital, Dongguan, Guangdong, China
- J Zheng
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- J Feng
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- C Hao
- Department of Medical Statistics & Sun Yat-sen Global Health Institute, School of Public Health and Institute of State Governance, Sun Yat-sen University, Guangzhou, Guangdong, China
- Y Yang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- N Wang
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China
- H Xie
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
37
Arroyo J, Marini TJ, Saavedra AC, Toscano M, Baran TM, Drennan K, Dozier A, Zhao YT, Egoavil M, Tamayo L, Ramos B, Castaneda B. No sonographer, no radiologist: New system for automatic prenatal detection of fetal biometry, fetal presentation, and placental location. PLoS One 2022; 17:e0262107. [PMID: 35139093 PMCID: PMC8827457 DOI: 10.1371/journal.pone.0262107] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Accepted: 12/17/2021] [Indexed: 02/06/2023] Open
Abstract
Ultrasound imaging is a vital component of high-quality obstetric care. In rural and under-resourced communities, the scarcity of ultrasound imaging results in a considerable gap in the healthcare of pregnant mothers. To increase access to ultrasound in these communities, we developed a new automated diagnostic framework, operated without an experienced sonographer or interpreting provider, for assessment of fetal biometric measurements, fetal presentation, and placental position. This approach involves the use of a standardized volume sweep imaging (VSI) protocol based solely on external body landmarks to obtain imaging without an experienced sonographer, and application of a deep learning algorithm (U-Net) for diagnostic assessment without a radiologist. Obstetric VSI ultrasound examinations were performed in Peru by an ultrasound operator with no previous ultrasound experience who underwent 8 hours of training on a standard protocol. The U-Net was trained to automatically segment the fetal head and placental location from the VSI ultrasound acquisitions to subsequently evaluate fetal biometry, fetal presentation, and placental position. In comparison to diagnostic interpretation of VSI acquisitions by a specialist, the U-Net model showed 100% agreement for fetal presentation (Cohen's κ = 1; p<0.0001) and 76.7% agreement for placental location (Cohen's κ = 0.59; p<0.0001). This corresponded to 100% sensitivity and specificity for fetal presentation and 87.5% sensitivity and 85.7% specificity for anterior placental location. The method also achieved a low relative error of 5.6% for biparietal diameter and 7.9% for head circumference. Biometry measurements corresponded to estimated gestational age within 2 weeks of those assigned by standard of care examination with up to 89% accuracy. This system could be deployed in rural and underserved areas to provide vital information about a pregnancy without a trained sonographer or interpreting provider.
The resulting increased access to ultrasound imaging and diagnosis could improve disparities in healthcare delivery in under-resourced areas.
Affiliation(s)
- Junior Arroyo
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
- Thomas J. Marini
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Ana C. Saavedra
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
- Marika Toscano
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, New York, United States of America
- Timothy M. Baran
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Kathryn Drennan
- Department of Obstetrics and Gynecology, University of Rochester Medical Center, Rochester, New York, United States of America
- Ann Dozier
- Department of Public Health, University of Rochester Medical Center, Rochester, New York, United States of America
- Yu Tina Zhao
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Miguel Egoavil
- Research & Development, Medical Innovation & Technology, Lima, Perú
- Lorena Tamayo
- Research & Development, Medical Innovation & Technology, Lima, Perú
- Berta Ramos
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- Benjamin Castaneda
- Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
38
Wang Y, Xue T, Li Q. A Robust Image-Sequence-Based Framework for Visual Place Recognition in Changing Environments. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:152-163. [PMID: 32203043 DOI: 10.1109/tcyb.2020.2977128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
This article proposes a robust image-sequence-based framework to deal with two challenges of visual place recognition in changing environments: 1) viewpoint variations and 2) environmental condition variations. Our framework includes two main parts. The first part is to calculate the distance between two images from a reference image sequence and a query image sequence. In this part, we remove the deep features of nonoverlap contents in these two images and utilize the remaining deep features to calculate the distance. As the deep features of nonoverlap contents are caused by viewpoint variations, removing these deep features can improve the robustness to viewpoint variations. Based on the first part, in the second part, we first calculate the distances of all pairs of images from a reference image sequence and a query image sequence, and obtain a distance matrix. Afterward, we design two convolutional operators to retrieve the distance submatrix with the minimum diagonal distribution. The minimum diagonal distribution contains more environmental information, which is insensitive to environmental condition variations. The experimental results suggest that our framework exhibits better performance than several state-of-the-art methods. Moreover, the analysis of runtime shows that our framework has the potential to satisfy real-time demands.
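The diagonal-retrieval step described in this abstract can be illustrated with a minimal sketch: given a reference-query distance matrix, slide a fixed-length diagonal window and keep the alignment with the smallest mean distance. Note this substitutes a plain diagonal-sum scan for the paper's learned convolutional operators, so it is an illustrative assumption, not the authors' implementation.

```python
def best_alignment(dist, win):
    """Return (start_row, start_col, mean_cost) of the length-`win`
    diagonal with the smallest mean distance in matrix `dist`.
    Rows index the reference sequence, columns the query sequence."""
    n_ref, n_qry = len(dist), len(dist[0])
    best = None
    for r in range(n_ref - win + 1):
        for c in range(n_qry - win + 1):
            # Mean cost along the diagonal starting at (r, c).
            cost = sum(dist[r + k][c + k] for k in range(win)) / win
            if best is None or cost < best[2]:
                best = (r, c, cost)
    return best


# Toy 4x4 distance matrix: the true match runs down the main diagonal,
# so the minimum-cost diagonal window starts at (0, 0).
D = [[0.1, 0.9, 0.8, 0.9],
     [0.9, 0.2, 0.9, 0.8],
     [0.8, 0.9, 0.1, 0.9],
     [0.9, 0.8, 0.9, 0.2]]
```

A real system would replace the brute-force scan with the paper's convolutional operators for efficiency; the sketch only shows the objective being minimized.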
39
Yu Y, Chen Z, Zhuang Y, Yi H, Han L, Chen K, Lin J. A guiding approach of Ultrasound scan for accurately obtaining standard diagnostic planes of fetal brain malformation. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:1243-1260. [PMID: 36155489 DOI: 10.3233/xst-221278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
BACKGROUND Standard planes (SPs) are crucial for the diagnosis of fetal brain malformation. However, acquiring the SPs accurately is very time-consuming and requires extensive experience, due to the large differences in fetal posture and the complexity of SP definitions. OBJECTIVE This study aims to present a guiding approach that could assist sonographers in obtaining the SPs more accurately and more quickly. METHODS To begin with, the sonographer uses the 3D probe to scan the fetal head to obtain 3D volume data; we then use an affine transformation to calibrate the 3D volume data to the standard body position and establish the corresponding 3D head model in 'real time'. When the sonographer uses the 2D probe to scan a plane, the position of the current plane can be clearly shown in the 3D head model by our RLNet (regression location network), which guides the sonographer to obtain the three SPs more accurately. Once the three SPs are located, the sagittal and coronal planes can be automatically generated according to their spatial relationship with the three SPs. RESULTS Experimental results conducted on 3200 2D US images show that the RLNet achieved an average angle error for the transthalamic plane of 3.91±2.86°, an obvious improvement compared with other published data. The automatically generated coronal and sagittal SPs conform to the diagnostic criteria and the diagnostic requirements of fetal brain malformation. CONCLUSIONS A guiding scanning method based on deep learning for ultrasonic brain malformation screening is proposed for the first time, and it has pragmatic value for future clinical application.
Affiliation(s)
- Yalan Yu
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Zhong Chen
- Department of US, General Hospital of Western Theater Command, Chengdu, China
- Yan Zhuang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Heng Yi
- Department of US, General Hospital of Western Theater Command, Chengdu, China
- Lin Han
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Haihong Intellimage Medical Technology (Tianjin) Co., Ltd, Tianjin, China
- Ke Chen
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Jiangli Lin
- College of Biomedical Engineering, Sichuan University, Chengdu, China
40
He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021; 8:729978. [PMID: 34977053 PMCID: PMC8716504 DOI: 10.3389/fmed.2021.729978] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 11/29/2021] [Indexed: 12/12/2022] Open
Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, the research on AI in prenatal US has practical significance with its application to prenatal US diagnosis improving work efficiency, providing quantitative assessments, standardizing measurements, improving diagnostic accuracy, and automating image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.
Affiliation(s)
- Lizhu Chen
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
41
A robust end-to-end deep learning framework for detecting Martian landforms with arbitrary orientations. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
42
Huang YJ, Dou Q, Wang ZX, Liu LZ, Jin Y, Li CF, Wang L, Chen H, Xu RH. 3-D RoI-Aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:5397-5408. [PMID: 32248143 DOI: 10.1109/tcyb.2020.2980145] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Segmentation of colorectal cancerous regions from 3-D magnetic resonance (MR) images is a crucial procedure for radiotherapy. Automatic delineation from 3-D whole volumes is in urgent demand yet very challenging. Drawbacks of existing deep-learning-based methods for this task are two-fold: 1) extensive graphics processing unit (GPU) memory footprint of 3-D tensor limits the trainable volume size, shrinks effective receptive field, and therefore, degrades speed and segmentation performance and 2) in-region segmentation methods supported by region-of-interest (RoI) detection are either blind to global contexts, detail richness compromising, or too expensive for 3-D tasks. To tackle these drawbacks, we propose a novel encoder-decoder-based framework for 3-D whole volume segmentation, referred to as 3-D RoI-aware U-Net (3-D RU-Net). 3-D RU-Net fully utilizes the global contexts covering large effective receptive fields. Specifically, the proposed model consists of a global image encoder for global understanding-based RoI localization, and a local region decoder that operates on pyramid-shaped in-region global features, which is GPU memory efficient and thereby enables training and prediction with large 3-D whole volumes. To facilitate the global-to-local learning procedure and enhance contour detail richness, we designed a dice-based multitask hybrid loss function. The efficiency of the proposed framework enables an extensive model ensemble for further performance gain at acceptable extra computational costs. Over a dataset of 64 T2-weighted MR images, the experimental results of four-fold cross-validation show that our method achieved 75.5% dice similarity coefficient (DSC) in 0.61 s per volume on a GPU, which significantly outperforms competing methods in terms of accuracy and efficiency. The code is publicly available.
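The dice-based hybrid loss mentioned above builds on the Dice similarity coefficient (DSC) that the results report. A minimal sketch of the metric on flat binary masks (a generic textbook definition, not the paper's multitask loss):

```python
def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks given as
    flat lists of 0/1: 2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(x * y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / total if total else 1.0
```

Segmentation losses typically use 1 − DSC (on soft probabilities) so that maximizing overlap minimizes the loss.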
43
Hareendranathan AR, Chahal BS, Zonoobi D, Sukhdeep D, Jaremko JL. Artificial Intelligence to Automatically Assess Scan Quality in Hip Ultrasound. Indian J Orthop 2021; 55:1535-1542. [PMID: 35003541 PMCID: PMC8688598 DOI: 10.1007/s43465-021-00455-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 07/04/2021] [Indexed: 02/04/2023]
Abstract
PURPOSE Since it is fast, inexpensive and increasingly portable, ultrasound can be used for early detection of Developmental Dysplasia of the Hip (DDH) in infants at point-of-care. However, accurate interpretation is highly dependent on scan quality. Poor-quality images lead to misdiagnosis, but inexperienced users may not even recognize the deficiencies in the images. Currently, users assess scan quality subjectively, based on image landmarks, which is prone to human error. Instead, we propose using Artificial Intelligence (AI) to automatically assess scan quality. METHODS We trained separate Convolutional Neural Network (CNN) models to detect the presence of each of four commonly used ultrasound landmarks in each hip image: straight horizontal iliac wing, labrum, os ischium and midportion of the femoral head. We used 100 3D ultrasound (3DUS) images for training and validated the technique on a set of 107 3DUS images also scored for landmarks by three non-expert readers and one expert radiologist. RESULTS The AI achieved ≥85% accuracy for all four landmarks (ilium = 0.89, labrum = 0.94, os ischium = 0.85, femoral head = 0.98) as a binary classifier between adequate and inadequate scan quality. Our technique also showed excellent agreement with manual assessment in terms of Intraclass Correlation Coefficient (ICC) and Cohen's kappa coefficient (K) for the ilium (ICC = 0.81, K = 0.56), os ischium (ICC = 0.89, K = 0.63) and femoral head (ICC = 0.83, K = 0.66), and moderate to good agreement for the labrum (ICC = 0.65, K = 0.33). CONCLUSION This new technique could ensure high scan quality and facilitate more widespread use of ultrasound in population screening for DDH.
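Cohen's kappa, the agreement statistic this abstract reports, corrects observed agreement for agreement expected by chance. A minimal sketch of the standard two-rater formula (a generic textbook implementation, not code from the study):

```python
from collections import Counter


def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' label lists:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the raters' marginal frequencies."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[label] * c2[label] for label in c1) / (n * n)
    if p_e == 1.0:  # degenerate case: both raters always give one label
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

For example, with labels [1, 1, 0, 0] and [1, 0, 0, 0], observed agreement is 0.75, chance agreement is 0.5, and kappa is 0.5 (moderate agreement on the usual interpretation scale).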
Affiliation(s)
- Baljot S. Chahal
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, T6G 2B7, Canada
- Dulai Sukhdeep
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, T6G 2B7, Canada
- Jacob L. Jaremko
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, T6G 2B7, Canada; MEDO.ai Inc, Singapore, Singapore
44
Yang X, Dou H, Huang R, Xue W, Huang Y, Qian J, Zhang Y, Luo H, Guo H, Wang T, Xiong Y, Ni D. Agent With Warm Start and Adaptive Dynamic Termination for Plane Localization in 3D Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1950-1961. [PMID: 33784618 DOI: 10.1109/tmi.2021.3069663] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Accurate standard plane (SP) localization is the fundamental step for prenatal ultrasound (US) diagnosis. Typically, dozens of US SPs are collected to determine the clinical diagnosis. 2D US has to perform scanning for each SP, which is time-consuming and operator-dependent, whereas 3D US, which contains multiple SPs in one shot, has the inherent advantages of less user-dependency and more efficiency. Automatically locating SPs in 3D US is very challenging due to the huge search space and large fetal posture variations. Our previous study proposed a deep reinforcement learning (RL) framework with an alignment module and active termination to localize SPs in 3D US automatically. However, termination of the agent search in RL is important and affects practical deployment. In this study, we enhance our previous RL framework with a newly designed adaptive dynamic termination to enable an early stop for the agent search, saving up to 67% of inference time and thus boosting the accuracy and efficiency of the RL framework at the same time. Besides, we validate the effectiveness and generalizability of our algorithm extensively on our in-house multi-organ datasets containing 433 fetal brain volumes, 519 fetal abdomen volumes, and 683 uterus volumes. Our approach achieves localization errors of 2.52mm/10.26°, 2.48mm/10.39°, 2.02mm/10.48°, 2.00mm/14.57°, 2.61mm/9.71°, 3.09mm/9.58°, and 1.49mm/7.54° for the transcerebellar, transventricular, and transthalamic planes in the fetal brain, the abdominal plane in the fetal abdomen, and the mid-sagittal, transverse, and coronal planes in the uterus, respectively. Experimental results show that our method is general and has the potential to improve the efficiency and standardization of US scanning.
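The mm/degree localization errors quoted above pair a distance term with an angle term between the predicted and ground-truth planes. A hedged sketch of one common way to compute such a metric (the exact definition used by the authors may differ; the normal vectors and anchor points here are illustrative):

```python
import math


def plane_error(n_pred, n_true, p_pred, p_true):
    """Return (angle in degrees between plane normals, Euclidean
    distance between plane anchor points). The angle uses |dot| so
    that a plane and its flipped normal count as identical."""
    dot = sum(a * b for a, b in zip(n_pred, n_true))
    norm = math.sqrt(sum(a * a for a in n_pred)) * \
           math.sqrt(sum(b * b for b in n_true))
    # Clamp to [0, 1] to guard against floating-point drift in acos.
    cos_angle = min(1.0, abs(dot) / norm)
    angle = math.degrees(math.acos(cos_angle))
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(p_pred, p_true)))
    return angle, dist
```

Averaging these two terms over a test set yields summary figures of the "2.52mm/10.26°" form reported in the abstract.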
45
Recognition of Fetal Facial Ultrasound Standard Plane Based on Texture Feature Fusion. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:6656942. [PMID: 34188691 PMCID: PMC8195636 DOI: 10.1155/2021/6656942] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Revised: 04/16/2021] [Accepted: 05/22/2021] [Indexed: 11/21/2022]
Abstract
In the process of prenatal ultrasound diagnosis, accurate identification of the fetal facial ultrasound standard plane (FFUSP) is essential for accurate facial deformity detection and disease screening, such as cleft lip and palate detection and Down syndrome screening. However, the traditional method of obtaining standard planes is manual screening by doctors; because doctors' experience levels differ, this method often leads to large errors in the results. Therefore, in this study, we propose a texture feature fusion method (LH-SVM) for automatic recognition and classification of FFUSP. First, we extract the images' texture features, including the Local Binary Pattern (LBP) and Histogram of Oriented Gradient (HOG), then perform feature fusion, and finally adopt a Support Vector Machine (SVM) for predictive classification. In our study, we used fetal facial ultrasound images from 20 to 24 weeks of gestation as experimental data, for a total of 943 standard plane images (221 ocular axial planes, 298 median sagittal planes, 424 nasolabial coronal planes, and 350 nonstandard planes; OAP, MSP, NCP, N-SP). Based on this data set, we performed five-fold cross-validation. The final test results show that the accuracy rate of the proposed method for FFUSP classification is 94.67%, the average precision rate is 94.27%, the average recall rate is 93.88%, and the average F1 score is 94.08%. The experimental results indicate that the texture feature fusion method can effectively predict and classify FFUSP, which provides an essential basis for clinical research on automatic FFUSP detection methods.
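Of the two texture descriptors fused above, the Local Binary Pattern is the simpler to illustrate. A minimal sketch of the basic 8-neighbour LBP code for a single 3×3 patch (the study's full pipeline additionally extracts HOG features, fuses the histograms, and trains an SVM):

```python
def lbp_code(patch):
    """Basic 8-neighbour local binary pattern code for the centre pixel
    of a 3x3 patch: each neighbour >= centre contributes one bit,
    walked clockwise from the top-left corner."""
    centre = patch[1][1]
    # Neighbour coordinates in clockwise order starting at top-left.
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i][j] >= centre:
            code |= 1 << bit
    return code
```

Computing this code at every pixel and histogramming the 256 possible values yields the LBP feature vector that would then be concatenated with the HOG histogram before classification.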
46
Recognition of Thyroid Ultrasound Standard Plane Images Based on Residual Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:5598001. [PMID: 34188673 PMCID: PMC8192196 DOI: 10.1155/2021/5598001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Revised: 04/27/2021] [Accepted: 05/14/2021] [Indexed: 01/22/2023]
Abstract
Ultrasound is one of the critical methods for diagnosis and treatment in thyroid examination. In clinical application, many factors, such as heavy outpatient traffic, the time-consuming training of sonographers, and the uneven professional level of physicians, often cause irregularities during the ultrasonic examination, leading to misdiagnosis or missed diagnosis. In order to standardize the thyroid ultrasound examination process, this paper proposes a deep learning method based on a residual network to recognize the Thyroid Ultrasound Standard Plane (TUSP). First, referring to multiple relevant guidelines, eight TUSP were determined with the advice of clinical ultrasound experts. A total of 5,500 TUSP images of 8 categories were collected with the approval and review of the Ethics Committee and the patients' informed consent. Then, after desensitizing and filling the images, the 18-layer residual network model (ResNet-18) was trained for TUSP image recognition, and five-fold cross-validation was performed. Finally, using indicators such as accuracy rate, we compared its recognition performance with that of other mainstream deep convolutional neural network models. Experimental results showed that ResNet-18 has the best recognition effect on TUSP images, with an average accuracy rate of 91.07%. The average macro precision, average macro recall, and average macro F1-score are 91.39%, 91.34%, and 91.30%, respectively. This proves that the deep learning method based on a residual network can effectively recognize TUSP images, which is expected to standardize clinical thyroid ultrasound examination and reduce misdiagnosis and missed diagnosis.
47
Gao Y, Liu B, Zhu Y, Chen L, Tan M, Xiao X, Yu G, Guo Y. Detection and recognition of ultrasound breast nodules based on semi-supervised deep learning: a powerful alternative strategy. Quant Imaging Med Surg 2021; 11:2265-2278. [PMID: 34079700 PMCID: PMC8107344 DOI: 10.21037/qims-20-12b] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2020] [Accepted: 01/18/2021] [Indexed: 11/06/2022]
Abstract
BACKGROUND The successful recognition of benign and malignant breast nodules using ultrasound images is based mainly on supervised learning that requires a large number of labeled images. However, because high-quality labeling is expensive and time-consuming, we hypothesized that semi-supervised learning could provide a low-cost and powerful alternative approach. This study aimed to develop an accurate semi-supervised recognition method and compared its performance with supervised methods and sonographers. METHODS The faster region-based convolutional neural network was used for nodule detection from ultrasound images. A semi-supervised classifier based on the mean teacher model was proposed to recognize benign and malignant nodule images. The general performance of the proposed method on two datasets (8,966 nodules) was reported. RESULTS The detection accuracy was 0.88±0.03 and 0.86±0.02, respectively, on two testing sets (1,350 and 2,220 nodules). When 800 labeled training nodules were available, the proposed semi-supervised model plus 4,396 unlabeled nodules performed better than the supervised learning model (area under the curve (AUC): 0.934±0.026 vs. 0.83±0.050; 0.916±0.022 vs. 0.815±0.049). The performance of the semi-supervised model trained on 800 labeled and 4,396 unlabeled nodules was close to that of the supervised learning model trained on a massive number of labeled nodules (n=5,196) (AUC: 0.934±0.026 vs. 0.952±0.027; 0.916±0.022 vs. 0.918±0.017). Moreover, the semi-supervised model was better than the average accuracy of five human sonographers (AUC: 0.922 vs. 0.889). CONCLUSIONS The semi-supervised model can achieve excellent performance for nodule recognition and be useful for medical sciences. The method reduced the number of labeled images required for training, thus significantly alleviating the difficulty in data preparation of medical artificial intelligence.
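The mean teacher model referenced above maintains the teacher's weights as an exponential moving average (EMA) of the student's weights across training steps. A minimal sketch of that update rule, with flat weight vectors assumed purely for illustration:

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher update: each teacher weight becomes an exponential
    moving average of the corresponding student weight,
    t <- alpha * t + (1 - alpha) * s."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]
```

In the full method, the student is trained on labeled data plus a consistency loss that pushes its predictions toward the teacher's on unlabeled images, while the teacher itself is never trained by gradient descent, only updated via this EMA.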
Affiliation(s)
- Yanhua Gao
- Department of Medical Imaging, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
- Department of Ultrasound, The Third Affiliated Hospital of Xi’an Jiaotong University, Shaanxi Provincial People’s Hospital, Xi’an, China
- Bo Liu
- Department of Ultrasound, The Third Affiliated Hospital of Xi’an Jiaotong University, Shaanxi Provincial People’s Hospital, Xi’an, China
- Yuan Zhu
- Department of Ultrasound, The Third Affiliated Hospital of Xi’an Jiaotong University, Shaanxi Provincial People’s Hospital, Xi’an, China
- Lin Chen
- Department of Pathology, The Third Affiliated Hospital of Xi’an Jiaotong University, Shaanxi Provincial People’s Hospital, Xi’an, China
- Miao Tan
- Department of Surgery, The Third Affiliated Hospital of Xi’an Jiaotong University, Shaanxi Provincial People’s Hospital, Xi’an, China
- Xiaozhou Xiao
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Youmin Guo
- Department of Medical Imaging, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
48
Yang X, Huang Y, Huang R, Dou H, Li R, Qian J, Huang X, Shi W, Chen C, Zhang Y, Wang H, Xiong Y, Ni D. Searching collaborative agents for multi-plane localization in 3D ultrasound. Med Image Anal 2021; 72:102119. [PMID: 34144345 DOI: 10.1016/j.media.2021.102119] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 03/30/2021] [Accepted: 05/14/2021] [Indexed: 11/29/2022]
Abstract
3D ultrasound (US) has become prevalent due to its rich spatial and diagnostic information not contained in 2D US. Moreover, 3D US can contain multiple standard planes (SPs) in one shot. Thus, automatically localizing SPs in 3D US has the potential to improve user-independence and scanning-efficiency. However, manual SP localization in 3D US is challenging because of the low image quality, huge search space and large anatomical variability. In this work, we propose a novel multi-agent reinforcement learning (MARL) framework to simultaneously localize multiple SPs in 3D US. Our contribution is four-fold. First, our proposed method is general and it can accurately localize multiple SPs in different challenging US datasets. Second, we equip the MARL system with a recurrent neural network (RNN) based collaborative module, which can strengthen the communication among agents and learn the spatial relationship among planes effectively. Third, we explore to adopt the neural architecture search (NAS) to automatically design the network architecture of both the agents and the collaborative module. Last, we believe we are the first to realize automatic SP localization in pelvic US volumes, and note that our approach can handle both normal and abnormal uterus cases. Extensively validated on two challenging datasets of the uterus and fetal brain, our proposed method achieves the average localization accuracy of 7.03°/1.59mm and 9.75°/1.19mm. Experimental results show that our light-weight MARL model has higher accuracy than state-of-the-art methods.
Affiliation(s)
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Yuhao Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Ruobing Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Haoran Dou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Rui Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Jikuan Qian
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Xiaoqiong Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Wenlong Shi
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Chaoyu Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Yuanji Zhang
- Department of Ultrasound, Luohu People's Hospital, Shenzhen, China
- Haixia Wang
- Department of Ultrasound, Luohu People's Hospital, Shenzhen, China
- Yi Xiong
- Department of Ultrasound, Luohu People's Hospital, Shenzhen, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
49
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110 DOI: 10.1016/j.ejrad.2021.109717] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/28/2021] [Accepted: 04/11/2021] [Indexed: 12/13/2022]
Abstract
Ultrasound (US), a flexible green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification and monitoring of diseases. The diagnostic performance is inevitably reduced by the intrinsically high operator-dependence of US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessments of imaging data, showing high potential to assist physicians in obtaining more accurate and reproducible results. In this article, we first provide a general understanding of AI, machine learning (ML) and deep learning (DL) technologies. We then review the rapidly growing applications of AI, especially DL technology, in the field of US, organized by anatomical region: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, musculoskeletal system and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
50
Dong G, Liu H. Global Receptive-Based Neural Network for Target Recognition in SAR Images. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:1954-1967. [PMID: 31794417 DOI: 10.1109/tcyb.2019.2952400] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The past years have witnessed a revival of neural networks and learning strategies. These models stack multiple hidden layers hierarchically and require large amounts of labeled samples to estimate the model parameters, a requirement that is difficult to meet for target recognition in realistic environments. For either spaceborne or airborne radars, collecting multiple samples with label information is very expensive and difficult. In addition, the huge computational cost and poor speed of convergence limit practical applications. To address these problems, this article presents a new notion of receptive signals, on which a special feedforward neural network hierarchy is built. The proposed strategy consists of two sequential modules: 1) feature generation and 2) feature refinement. We first build pairwise baseline signals by means of the Riesz transform along the range and azimuth directions, and extend them to a family of receptive signals using a bandpass filter bank. The input SAR image is then convolved with the set of receptive signals to extract global features, from which certain kinds of information can be exploited. The receptive signals are predefined, rather than learned automatically, to handle the small-sample-size setting; in addition, expert knowledge can be transferred into the neural network. The resulting features are further refined by a special unit in which the input neurons and the latent states are bridged by randomly generated weights and biases that are kept fixed during training. We also cast the latent state into a Hilbert space, forming a kernel version of the refinement. The aim is to achieve comparable or even better performance with limited training resources.
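The two-module pipeline this abstract describes (predefined "receptive" filters for global feature generation, followed by a refinement unit with fixed random weights) can be illustrated with a minimal NumPy sketch. Note this is an illustrative assumption, not the paper's method: a Gabor-like bank stands in for the Riesz-transform-derived receptive signals, and the pooling scheme and layer sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def receptive_bank(size=15, n_orient=4, n_scale=2):
    """Predefined bandpass filters (a Gabor-like stand-in for the
    paper's Riesz-transform-derived receptive signals)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for s in range(1, n_scale + 1):
        sigma = 2.0 * s
        for k in range(n_orient):
            theta = np.pi * k / n_orient
            u = xs * np.cos(theta) + ys * np.sin(theta)
            envelope = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
            bank.append(envelope * np.cos(2 * np.pi * u / (4 * s)))
    return bank

def global_features(img, bank):
    """Module 1: convolve the image with each fixed filter (via FFT)
    and pool each response map into one global feature."""
    F = np.fft.fft2(img)
    return np.array([
        np.abs(np.fft.ifft2(F * np.fft.fft2(f, s=img.shape)).real).mean()
        for f in bank
    ])

def random_refinement(x, n_hidden=32):
    """Module 2: refine features through a layer whose weights and
    bias are randomly generated once and never trained."""
    W = rng.normal(size=(n_hidden, x.size))
    b = rng.normal(size=n_hidden)
    return np.tanh(W @ x + b)

img = rng.random((64, 64))  # toy stand-in for a SAR image chip
h = random_refinement(global_features(img, receptive_bank()))
print(h.shape)  # (32,)
```

Because both the filter bank and the refinement weights are fixed, only a final readout (not shown) would need training, which is what makes the approach viable with very few labeled samples.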