1. Lei T, Feng JL, Lin MF, Xie BH, Zhou Q, Wang N, Zheng Q, Yang YD, Guo HM, Xie HN. Development and validation of an artificial intelligence assisted prenatal ultrasonography screening system for trainees. Int J Gynaecol Obstet 2024; 165:306-317. [PMID: 37789758 DOI: 10.1002/ijgo.15167]
Abstract
OBJECTIVE Fetal anomaly screening via ultrasonography, which involves capturing and interpreting standard views, is highly challenging for inexperienced operators. We aimed to develop and validate a prenatal-screening artificial intelligence system (PSAIS) for real-time evaluation of the quality of anatomical images, indicating existing and missing structures. METHODS Still ultrasonographic images obtained from fetuses of 18-32 weeks of gestation between 2017 and 2018 were used to develop PSAIS based on YOLOv3 with global (anatomic site) and local (structures) feature extraction that could evaluate the image quality and indicate existing and missing structures in the fetal anatomical images. The performance of the PSAIS in recognizing 19 standard views was evaluated using retrospective real-world fetal scan video validation datasets from four hospitals. We stratified sampled frames (standard, similar-to-standard, and background views at approximately 1:1:1) for experts to blindly verify the results. RESULTS The PSAIS was trained using 134 696 images and validated using 836 videos with 12 697 images. For internal and external validations, the multiclass macro-average areas under the receiver operating characteristic curve were 0.943 (95% confidence interval [CI], 0.815-1.000) and 0.958 (0.864-1.000); the micro-average areas were 0.974 (0.970-0.979) and 0.973 (0.965-0.981), respectively. For similar-to-standard views, the PSAIS accurately labeled 90.9% (90.0%-91.4%) with key structures and indicated missing structures. CONCLUSIONS An artificial intelligence system developed to assist trainees in fetal anomaly screening demonstrated high agreement with experts in standard view identification.
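The macro- and micro-averaged AUROC figures reported above can be reproduced for a multiclass standard-view classifier with a one-vs-rest computation such as the sketch below. This is an illustration of the metric only, not code from the paper; the Mann-Whitney formulation, label encoding, and score layout are assumptions.

```python
def auc_binary(labels, scores):
    """AUROC via the Mann-Whitney U statistic; ties count as 0.5 wins."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(labels, score_rows, n_classes):
    """One-vs-rest AUROC per class, then an unweighted (macro) average."""
    per_class = []
    for c in range(n_classes):
        y = [1 if t == c else 0 for t in labels]
        s = [row[c] for row in score_rows]
        per_class.append(auc_binary(y, s))
    return sum(per_class) / n_classes

def micro_auc(labels, score_rows, n_classes):
    """Pool every (frame, class) decision into one binary problem (micro)."""
    pairs = [(1 if t == c else 0, row[c])
             for t, row in zip(labels, score_rows)
             for c in range(n_classes)]
    return auc_binary([y for y, _ in pairs], [s for _, s in pairs])
```

Macro averaging weights each of the 19 standard views equally, which is why the paper reports both: macro exposes rare-view performance, micro reflects overall per-frame accuracy.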
Affiliation(s)
- Ting Lei
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Jie Ling Feng
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Mei Fang Lin
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Bai Hong Xie
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangzhou, Guangdong, China
- Qian Zhou
- Clinical Trials Unit, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Nan Wang
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangzhou, Guangdong, China
- Qiao Zheng
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Yan Dong Yang
- Department of Ultrasonic Medicine, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Hong Mei Guo
- Department of Ultrasonic Medicine, DongGuan City Maternal and Child Health Hospital, DongGuan, China
- Hong Ning Xie
- Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
2. Liu X, Li P, Yang Y, Tian C. Ultrasound-based horizontal ranging in the localization of fetal conus medullaris. Technol Health Care 2024; 32:1371-1382. [PMID: 37781826 PMCID: PMC11091612 DOI: 10.3233/thc-230332]
Abstract
BACKGROUND A variety of ultrasound methods exist for localizing the conus medullaris, but measured values can be influenced by variations in spinal flexion and extension. OBJECTIVE To overcome this limitation, the present study measured the horizontal distance (HD) between the end of the conus medullaris and the caudal edge of the last vertebral body ossification in normal fetuses at different gestational weeks, and analyzed the relationship between the measured values and fetal growth, as well as their utility in assessing the position of the conus medullaris. METHODS A total of 655 fetuses at gestational weeks 18-40 who underwent routine prenatal ultrasound were included in the study. We measured the distance between the end of the fetal spinal cord cone and the caudal end of the final vertebral ossification center (Distance 1, D1), the distance between the end of the spinal cord cone and the intersection of the extension of D1 with the caudal skin (Distance 2, D2), and HD. We analyzed the correlation between these measurements and gestational weeks, established normal reference values, calculated the ratios of D1, D2, and HD to commonly used growth parameters, and analyzed the application value of each ratio; inter-physician reliability of repeated measurements was also assessed. RESULTS D1, D2, and HD exhibited strong linear correlations with gestational weeks. Among the ratios to common growth parameters, D2/FL stabilized after 20 weeks of gestation and consistently exceeded 1. Repeated measurements of D1, D2, and HD showed good reliability (P > 0.05). CONCLUSION D1, D2, and HD are significantly correlated with gestational age. Horizontal distance measurement can effectively determine the position of the fetal conus medullaris, enabling rapid prenatal evaluation of a low-lying conus medullaris and exclusion of a tethered cord.
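The D2/FL observation in the results (the ratio stabilizing above 1 after 20 weeks in normal fetuses) lends itself to a simple screening check. The sketch below illustrates that rule only; the function name, millimeter units, and early-GA behavior are assumptions, not a validated clinical tool:

```python
def flag_low_conus(d2_mm: float, fl_mm: float, ga_weeks: float):
    """Illustrative screen based on the study's D2/FL observation.

    In normal fetuses the paper reports D2/FL stabilizes after 20 weeks
    of gestation and consistently exceeds 1, so a ratio <= 1 is flagged
    for expert review.  Returns None before 20 weeks (ratio not yet
    stable), otherwise True if the measurement warrants review.
    """
    if ga_weeks < 20:
        return None
    return (d2_mm / fl_mm) <= 1.0
```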
Affiliation(s)
- Xiuping Liu
- Department of Obstetrics and Gynecology, Hebei Medical University Third Hospital, Shijiazhuang, Hebei, China
- Ping Li
- Department of Obstetrics and Gynecology, Hebei Medical University Third Hospital, Shijiazhuang, Hebei, China
- Yuemin Yang
- Department of Obstetrics and Gynecology, Hebei Medical University Third Hospital, Shijiazhuang, Hebei, China
- Cheng Tian
- Department of Ultrasound, Hebei Medical University Third Hospital, Shijiazhuang, Hebei, China
3. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298 PMCID: PMC10649694 DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. US is considered cost-effective and easily accessible, but it is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with available full-text copies were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
4. Ramirez Zegarra R, Ghi T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. Ultrasound Obstet Gynecol 2023; 62:185-194. [PMID: 36436205 DOI: 10.1002/uog.26130]
Abstract
Deep learning is considered the leading artificial intelligence tool in image analysis in general. Deep-learning algorithms excel at image recognition, which makes them valuable in medical imaging. Obstetric ultrasound has become the gold standard imaging modality for detection and diagnosis of fetal malformations. However, ultrasound relies heavily on the operator's experience, making it unreliable in inexperienced hands. Several studies have proposed the use of deep-learning models as a tool to support sonographers, in an attempt to overcome these problems inherent to ultrasound. Deep learning has many clinical applications in the field of fetal imaging, including identification of normal and abnormal fetal anatomy and measurement of fetal biometry. In this Review, we provide a comprehensive explanation of the fundamentals of deep learning in fetal imaging, with particular focus on its clinical applicability. © 2022 International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- R Ramirez Zegarra
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- T Ghi
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
5. Xiao S, Zhang J, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Application and Progress of Artificial Intelligence in Fetal Ultrasound. J Clin Med 2023; 12:3298. [PMID: 37176738 PMCID: PMC10179567 DOI: 10.3390/jcm12093298]
Abstract
Prenatal ultrasonography is the most crucial imaging modality during pregnancy. However, problems such as high fetal mobility, excessive maternal abdominal wall thickness, and inter-observer variability limit the development of traditional ultrasound in clinical applications. The combination of artificial intelligence (AI) and obstetric ultrasound may help optimize fetal ultrasound examination by shortening the examination time, reducing the physician's workload, and improving diagnostic accuracy. AI has been successfully applied to automatic fetal ultrasound standard plane detection, biometric parameter measurement, and disease diagnosis to facilitate conventional imaging approaches. In this review, we attempt to thoroughly review the applications and advantages of AI in prenatal fetal ultrasound and discuss the challenges and promises of this new field.
Affiliation(s)
- Sushan Xiao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Junmin Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Ye Zhu
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Zisang Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Haiyan Cao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Mingxing Xie
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Li Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
6. Sarno L, Neola D, Carbone L, Saccone G, Carlea A, Miceli M, Iorio GG, Mappa I, Rizzo G, Girolamo RD, D'Antonio F, Guida M, Maruotti GM. Use of artificial intelligence in obstetrics: not quite ready for prime time. Am J Obstet Gynecol MFM 2023; 5:100792. [PMID: 36356939 DOI: 10.1016/j.ajogmf.2022.100792]
Abstract
Artificial intelligence is finding several applications in healthcare settings. This study aimed to report evidence on the effectiveness of artificial intelligence applications in obstetrics. Through a narrative review of the literature, we described artificial intelligence use in different obstetrical areas: prenatal diagnosis, fetal heart monitoring, prediction and management of pregnancy-related complications (preeclampsia, preterm birth, gestational diabetes mellitus, and placenta accreta spectrum), and labor. Artificial intelligence seems to be a promising tool to help clinicians in daily clinical activity. The main advantages that emerged from this review are reduced inter- and intraoperator variability, shorter procedure times, and improved overall diagnostic performance. However, the diffusion of these systems into routine clinical practice still raises several issues. Reported evidence remains very limited, and further studies are needed to confirm the clinical applicability of artificial intelligence. Moreover, better training for the clinicians who will use these systems should be ensured, and evidence-based guidelines on this topic should be produced to enhance the strengths of artificial systems and minimize their limitations.
Affiliation(s)
- Laura Sarno
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Daniele Neola
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Luigi Carbone
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Gabriele Saccone
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Annunziata Carlea
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Marco Miceli
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida); CEINGE Biotecnologie Avanzate, Naples, Italy (Dr Miceli)
- Giuseppe Gabriele Iorio
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Ilenia Mappa
- Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy (Dr Mappa and Dr Rizzo)
- Giuseppe Rizzo
- Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy (Dr Mappa and Dr Rizzo)
- Raffaella Di Girolamo
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Francesco D'Antonio
- Center for Fetal Care and High Risk Pregnancy, Department of Obstetrics and Gynecology, University G. D'Annunzio of Chieti-Pescara, Chieti, Italy (Dr D'Antonio)
- Maurizio Guida
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
- Giuseppe Maria Maruotti
- Gynecology and Obstetrics Unit, Department of Public Health, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Maruotti)
7. Lee C, Willis A, Chen C, Sieniek M, Watters A, Stetson B, Uddin A, Wong J, Pilgrim R, Chou K, Tse D, Shetty S, Gomes RG. Development of a Machine Learning Model for Sonographic Assessment of Gestational Age. JAMA Netw Open 2023; 6:e2248685. [PMID: 36598790 PMCID: PMC9857195 DOI: 10.1001/jamanetworkopen.2022.48685]
Abstract
IMPORTANCE Fetal ultrasonography is essential for confirmation of gestational age (GA), and accurate GA assessment is important for providing appropriate care throughout pregnancy and for identifying complications, including fetal growth disorders. Derivation of GA from manual fetal biometry measurements (ie, head, abdomen, and femur) is operator dependent and time-consuming. OBJECTIVE To develop artificial intelligence (AI) models to estimate GA with higher accuracy and reliability, leveraging standard biometry images and fly-to ultrasonography videos. DESIGN, SETTING, AND PARTICIPANTS To improve GA estimates, this diagnostic study used AI to interpret standard plane ultrasonography images and fly-to ultrasonography videos, which are 5- to 10-second videos that can be automatically recorded as part of the standard of care before the still image is captured. Three AI models were developed and validated: (1) an image model using standard plane images, (2) a video model using fly-to videos, and (3) an ensemble model (combining both image and video models). The models were trained and evaluated on data from the Fetal Age Machine Learning Initiative (FAMLI) cohort, which included participants from 2 study sites at Chapel Hill, North Carolina (US), and Lusaka, Zambia. Participants were eligible to be part of this study if they received routine antenatal care at 1 of these sites, were aged 18 years or older, had a viable intrauterine singleton pregnancy, and could provide written consent. They were not eligible if they had known uterine or fetal abnormality, or had any other conditions that would make participation unsafe or complicate interpretation. Data analysis was performed from January to July 2022. MAIN OUTCOMES AND MEASURES The primary analysis outcome for GA was the mean difference in absolute error between the GA model estimate and the clinical standard estimate, with the ground truth GA extrapolated from the initial GA estimated at an initial examination. 
RESULTS Of the total cohort of 3842 participants, data were calculated for a test set of 404 participants with a mean (SD) age of 28.8 (5.6) years at enrollment. All models were statistically superior to standard fetal biometry-based GA estimates derived from images captured by expert sonographers. The ensemble model had the lowest mean absolute error compared with the clinical standard fetal biometry (mean [SD] difference, -1.51 [3.96] days; 95% CI, -1.90 to -1.10 days). All 3 models outperformed standard biometry by a more substantial margin on fetuses that were predicted to be small for their GA. CONCLUSIONS AND RELEVANCE These findings suggest that AI models have the potential to empower trained operators to estimate GA with higher accuracy.
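The primary outcome above (mean difference in absolute error, e.g. -1.51 days for the ensemble model) is a paired comparison of the model's and the clinical standard's absolute errors against the reference GA. A minimal sketch of that metric follows; the function name and day-based units are assumptions, not the authors' code:

```python
def mean_abs_error_difference(model_ga, clinical_ga, reference_ga):
    """Mean of |model error| - |clinical error| across fetuses, in days.

    Negative values mean the model's GA estimate is, on average, closer
    to the reference GA than the clinical-standard biometry estimate.
    """
    diffs = [abs(m - r) - abs(c - r)
             for m, c, r in zip(model_ga, clinical_ga, reference_ga)]
    return sum(diffs) / len(diffs)
```

Because the comparison is paired per fetus, per-case variability cancels out, which is why the study can detect a roughly 1.5-day advantage despite multi-day scatter in individual estimates.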
Affiliation(s)
- Chace Lee
- Google Health, Palo Alto, California
- Amber Watters
- Department of Obstetrics and Gynecology, Northwestern University Feinberg School of Medicine, Chicago, Illinois
- Bethany Stetson
- Department of Obstetrics and Gynecology, Northwestern University Feinberg School of Medicine, Chicago, Illinois
8. A Novel Lightweight Deep Learning-Based Histopathological Image Classification Model for IoMT. Neural Process Lett 2023; 55:205-228. [PMID: 34121912 PMCID: PMC8185315 DOI: 10.1007/s11063-021-10555-1]
Abstract
The unavailability of appropriate mechanisms for timely detection of diseases and subsequent treatment causes the death of a large number of people around the globe. The timely diagnosis of grave diseases such as different forms of cancer and other life-threatening conditions can save a valuable life, or at least extend the life span of an afflicted individual. The advancement of Internet of Medical Things (IoMT)-enabled healthcare technologies can provide effective medical facilities to the population and contribute greatly to the recuperation of patients. The use of IoMT in the diagnosis and study of histopathological images can enable real-time identification of diseases so that corresponding remedial actions can be taken to save an affected individual. This can be achieved through imaging apparatus capable of auto-analysis of captured images. However, most deep learning-based image classification models are bulky and inappropriate for use in IoT-based imaging devices. The objective of this research work is to design a lightweight deep learning-based model suitable for histopathological image analysis with appreciable accuracy. This paper presents a novel lightweight deep learning-based model, "ReducedFireNet", for auto-classification of histopathological images. The proposed method attained a mean accuracy of 96.88% and an F1 score of 0.968 when evaluated on an actual histopathological image dataset. The results are encouraging, considering the complexity of histopathological images. In addition to its high accuracy, the lightweight design of the ReducedFireNet model (only a few hundred kilobytes) makes it suitable for IoMT imaging equipment. Simulation results show the proposed model has a computational requirement of only 0.201 GFLOPs and a size of a mere 0.391 MB.
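The reported 0.391 MB footprint is consistent with a float32 parameter count on the order of 10^5. A back-of-the-envelope estimate like the one below makes the IoMT-suitability argument concrete; the parameter count in the example is an assumed round figure, not taken from the paper:

```python
def model_size_mb(n_params: int, bytes_per_param: int = 4) -> float:
    """Approximate on-disk size of an uncompressed model: one float32
    (4 bytes) per parameter, converted to mebibytes."""
    return n_params * bytes_per_param / (1024 ** 2)
```

For example, a model of roughly 102,500 float32 parameters occupies about 0.391 MB, matching the order of magnitude reported for ReducedFireNet; quantizing to int8 (one byte per parameter) would shrink it further by a factor of four.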
9. Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861 DOI: 10.1016/j.media.2022.102629]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing fetal ultrasound (US) images. A number of survey papers in the field are available today, but most of them focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and commented on from both the methodology and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical-structure analysis, and (iii) biometry-parameter estimation. For each category, the main limitations and open issues are presented. Summary tables are included to facilitate comparison among the different approaches. In addition, emerging applications are also outlined. Publicly available datasets and the performance metrics commonly used to assess algorithm performance are summarized, too. This paper ends with a critical summary of the current state of the art in DL algorithms for fetal US image analysis and a discussion of the challenges that researchers working in the field must tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
10. Lee S, Kang M, Byeon K, Lee SE, Lee IH, Kim YA, Kang SW, Park JT. Machine Learning-Aided Chronic Kidney Disease Diagnosis Based on Ultrasound Imaging Integrated with Computer-Extracted Measurable Features. J Digit Imaging 2022; 35:1091-1100. [PMID: 35411524 PMCID: PMC9582094 DOI: 10.1007/s10278-022-00625-8]
Abstract
Although ultrasound plays an important role in the diagnosis of chronic kidney disease (CKD), image interpretation requires extensive training. High operator variability and limited image quality control of ultrasound images have made the application of computer-aided diagnosis (CAD) challenging. This study assessed the effect of integrating computer-extracted measurable features with a convolutional neural network (CNN) on the ultrasound image CAD accuracy for CKD. Ultrasound images from patients who visited Severance Hospital and Gangnam Severance Hospital in South Korea between 2011 and 2018 were used. A Mask R-CNN model was used for organ segmentation and measurable-feature extraction. Data on kidney length and kidney-to-liver echogenicity ratio were extracted. A ResNet18 model classified kidney ultrasound images into CKD and non-CKD. Experiments were conducted with and without the input of the measurable-feature data. The performance of each model was evaluated using the area under the receiver operating characteristic curve (AUROC). A total of 909 patients (mean age, 51.4 ± 19.3 years; 414 [45.5%] men and 495 [54.5%] women) were included in the study. The average AUROC from the model trained using ultrasound images alone reached 0.81. Integrating the automatically extracted kidney length and echogenicity features into training improved the average AUROC to 0.88. This value further increased to 0.91 when the clinical information of underlying diabetes was also included in the model trained with the CNN and measurable features. The automated step-wise machine learning-aided model segmented, measured, and classified the kidney ultrasound images with high performance. The integration of computer-extracted measurable features into the machine learning model may improve CKD classification.
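The gain the study reports from adding measurable features to the CNN can be illustrated with a simple late-fusion scheme: the image model's probability, the extracted kidney length and echogenicity ratio, and the diabetes flag enter a logistic combiner. Everything below (weights, function names, variable scaling) is an illustrative assumption; the authors integrated the features during model training rather than using this exact fusion:

```python
import math

def fused_ckd_score(cnn_prob, kidney_len_cm, echo_ratio, diabetic,
                    weights=(2.0, -0.4, 1.5, 0.8), bias=0.0):
    """Logistic late fusion of an image-model probability with
    computer-extracted features.  Smaller kidneys and a higher
    kidney-to-liver echogenicity ratio push the score toward CKD;
    all weights here are illustrative, not fitted values."""
    w_cnn, w_len, w_echo, w_dm = weights
    z = (bias + w_cnn * cnn_prob + w_len * kidney_len_cm
         + w_echo * echo_ratio + w_dm * (1 if diabetic else 0))
    return 1.0 / (1.0 + math.exp(-z))
```

In practice such weights would be fitted (for example by logistic regression on a held-out set), but the sketch shows why the extra inputs can only help: the combiner can always zero them out and fall back to the image model alone.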
Affiliation(s)
- Sangmi Lee
- Department of Internal Medicine, College of Medicine, Institute of Kidney Disease Research, Yonsei University, Seoul, Korea
| | | | | | - Sang Eun Lee
- Department of Preventive Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Biostatics Collaboration Unit, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - In Ho Lee
- AI Team, INFINYX, Daegu, Republic of Korea
| | - Young Ah Kim
- Department of Medical Informatics, Yonsei University Health System, Seoul, Korea
| | - Shin-Wook Kang
- Department of Internal Medicine, College of Medicine, Institute of Kidney Disease Research, Yonsei University, Seoul, Korea
- Jung Tak Park
- Department of Internal Medicine, College of Medicine, Institute of Kidney Disease Research, Yonsei University, Seoul, Korea.
- Department of Internal Medicine, Severance Hospital, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 120-752, Korea.
11
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024 PMCID: PMC9287600 DOI: 10.1016/j.isci.2022.104713] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/09/2022] [Accepted: 06/28/2022] [Indexed: 11/26/2022] Open
Abstract
Several reviews have been conducted on artificial intelligence (AI) techniques to improve pregnancy outcomes, but none has focused on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Of 1269 studies, 107 were included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification was the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetal head (43), fetal body (31), fetal heart (13), fetal abdomen (10), and fetal face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Highlights:
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized based on four categories: general, head, heart, face, abdomen
- The most used AI techniques are classification, segmentation, object detection, and RL
- The research and practical implications are included
12
Płotka S, Klasa A, Lisowska A, Seliga-Siwecka J, Lipa M, Trzciński T, Sitek A. Deep learning fetal ultrasound video model match human observers in biometric measurements. Phys Med Biol 2022; 67. [PMID: 35051921 DOI: 10.1088/1361-6560/ac4d85] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 01/20/2022] [Indexed: 11/11/2022]
Abstract
Objective. This work investigates the use of deep convolutional neural networks (CNNs) to automatically perform measurements of fetal body parts, including head circumference, biparietal diameter, abdominal circumference, and femur length, and to estimate gestational age and fetal weight using fetal ultrasound videos. Approach. We developed a novel multi-task CNN-based spatio-temporal fetal US feature extraction and standard plane detection algorithm (called FUVAI) and evaluated the method on 50 freehand fetal US video scans. We compared FUVAI fetal biometric measurements with measurements made by five experienced sonographers at two time points separated by at least two weeks. Intra- and inter-observer variabilities were estimated. Main results. We found that automated fetal biometric measurements obtained by FUVAI were comparable to the measurements performed by experienced sonographers. The observed differences in measurement values were within the range of inter- and intra-observer variability, and analysis showed that these differences were not statistically significant when comparing any individual medical expert to our model. Significance. We argue that FUVAI has the potential to assist sonographers who perform fetal biometric measurements in clinical settings by providing them with suggestions regarding the best measuring frames, along with automated measurements. Moreover, FUVAI is able to perform these tasks in just a few seconds, compared with the average of six minutes taken by sonographers. This is significant given the shortage of medical experts capable of interpreting fetal ultrasound images in numerous countries.
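The agreement analysis described here is the classic Bland-Altman approach: compute the bias (mean difference) between two sets of paired measurements and the 95% limits of agreement, then check whether model-vs-expert differences stay inside the expert-vs-expert limits. A minimal sketch, with hypothetical head-circumference values rather than FUVAI data:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical head-circumference measurements (mm): model vs. sonographer
model = [250.1, 251.0, 249.5, 250.7]
expert = [250.0, 250.6, 249.9, 250.9]
bias, lo, hi = bland_altman_limits(model, expert)
# agreement holds if roughly 95% of the differences fall within [lo, hi]
```

The same routine applied to two expert readings yields the inter-observer limits against which a model's differences can be judged.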
Affiliation(s)
- Szymon Płotka
- Sano Centre for Computational Medicine, Czarnowiejska 36, 30-054 Cracow, Poland; Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland; Fetai Health Ltd., Warsaw, Poland
- Aneta Lisowska
- Sano Centre for Computational Medicine, Czarnowiejska 36, 30-054 Cracow, Poland; Poznan University of Technology, Piotrowo 3, 60-965 Poznan, Poland
- Michał Lipa
- 1st Department of Obstetrics and Gynecology, Medical University of Warsaw, Plac Starynkiewicza 1/3, 02-015 Warsaw, Poland
- Tomasz Trzciński
- Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland; Jagiellonian University, Prof. Stanisława Łojosiewicza 6, 30-348 Cracow, Poland
- Arkadiusz Sitek
- Sano Centre for Computational Medicine, Czarnowiejska 36, 30-054 Cracow, Poland
13
He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021; 8:729978. [PMID: 34977053 PMCID: PMC8716504 DOI: 10.3389/fmed.2021.729978] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 11/29/2021] [Indexed: 12/12/2022] Open
Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, research on AI in prenatal US has practical significance: its application to prenatal US diagnosis improves work efficiency, provides quantitative assessments, standardizes measurements, improves diagnostic accuracy, and automates image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.
Affiliation(s)
- Lizhu Chen
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
14
Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. [PMID: 34201827 PMCID: PMC8301304 DOI: 10.3390/biomedicines9070720] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 06/13/2021] [Accepted: 06/18/2021] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and by the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed as steadily as in other medical imaging modalities. Characteristic issues of US imaging, such as its manual operation and acoustic shadows, make image quality control difficult. In this review, we introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, algorithms suited to US imaging analysis, AI explainability for obtaining informed consent, the approval process for medical AI devices, and future perspectives on the clinical application of AI-based US diagnostic support technologies.
Affiliation(s)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan; (A.S.); (S.Y.)
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan; (A.S.); (S.Y.)
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Syuzo Kaneko
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Ryuji Hamamoto
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
15
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110 DOI: 10.1016/j.ejrad.2021.109717] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/28/2021] [Accepted: 04/11/2021] [Indexed: 12/13/2022]
Abstract
Ultrasound (US), a flexible green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification, and monitoring of diseases, and diagnostic performance is inevitably reduced by the intrinsic high operator dependence of US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessments of imaging data, showing high potential to assist physicians in acquiring more accurate and reproducible results. In this article, we first provide a general understanding of AI, machine learning (ML), and deep learning (DL) technologies. We then review the rapidly growing applications of AI, especially DL, in US across the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, musculoskeletal system, and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
16
Automated ultrasound assessment of amniotic fluid index using deep learning. Med Image Anal 2021; 69:101951. [PMID: 33515982 DOI: 10.1016/j.media.2020.101951] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Revised: 12/13/2020] [Accepted: 12/21/2020] [Indexed: 12/19/2022]
Abstract
The estimation of antenatal amniotic fluid (AF) volume (AFV) is important, as it offers crucial information about fetal development, fetal well-being, and perinatal prognosis. However, AFV measurement is cumbersome and patient specific. Moreover, it is heavily sonographer dependent, with measurement accuracy varying greatly depending on the sonographer's experience. Therefore, the development of accurate, robust, and adoptable methods to evaluate AFV is highly desirable. In this regard, automation is expected to reduce user-based variability and the workload of sonographers. However, automating AFV measurement is very challenging, because accurate detection of AF pockets is difficult owing to various confounding factors, such as reverberation artifacts, AF-mimicking regions, and floating matter. Furthermore, the AF pocket exhibits an unspecified variety of shapes and sizes, and ultrasound images often show missing or incomplete structural boundaries. To overcome these difficulties, we develop a hierarchical deep-learning-based method that considers clinicians' anatomical-knowledge-based approaches. The key step is the segmentation of the AF pocket using our proposed deep learning network, AF-net. AF-net is a variation of U-net combined with three complementary concepts: atrous convolution, a multi-scale side-input layer, and a side-output layer. The experimental results demonstrate that the proposed method provides a measurement of the amniotic fluid index (AFI) that is as robust and precise as the results from clinicians. The proposed method achieved a Dice similarity of 0.877±0.086 for AF segmentation, a mean absolute error of 2.666±2.986, and a mean relative error of 0.018±0.023 for the AFI value. To the best of our knowledge, our method provides, for the first time, an automated measurement of the AFI.
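The segmentation quality reported above is scored with the Dice similarity coefficient: twice the overlap divided by the combined size of the predicted and reference masks. A minimal sketch on toy binary masks (not the AF-net implementation):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# toy masks: 2 overlapping pixels out of 3 + 3 -> Dice ≈ 0.667
a = [[1, 1, 0], [0, 1, 0]]
b = [[1, 0, 0], [0, 1, 1]]
print(round(float(dice(a, b)), 3))  # 0.667
```

Identical masks score 1.0 and disjoint masks 0.0, so the reported 0.877±0.086 indicates substantial overlap with clinician annotations.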
17
Pandey PU, Quader N, Guy P, Garbi R, Hodgson AJ. Ultrasound Bone Segmentation: A Scoping Review of Techniques and Validation Practices. ULTRASOUND IN MEDICINE & BIOLOGY 2020; 46:921-935. [PMID: 31982208 DOI: 10.1016/j.ultrasmedbio.2019.12.014] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Revised: 12/04/2019] [Accepted: 12/11/2019] [Indexed: 06/10/2023]
Abstract
Ultrasound bone segmentation is an important yet challenging task for many clinical applications. Several works have emerged attempting to improve and automate bone segmentation, leading to a variety of computational techniques, validation practices, and applied clinical scenarios. We characterize this exciting and growing body of research by reviewing published ultrasound bone segmentation techniques. We review 56 articles in detail and categorize and discuss the image analysis techniques that have been used for bone segmentation. We highlight the general trends of this field in terms of clinical motivation, image analysis techniques, ultrasound modalities, and the types of validation practices used to quantify segmentation performance. Finally, we present an outlook on promising areas of research based on the unaddressed needs in ultrasound bone segmentation.
Affiliation(s)
- Prashant U Pandey
- Biomedical Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada.
- Niamul Quader
- Electrical and Computer Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada
- Pierre Guy
- Department of Orthopaedics, University of British Columbia, Vancouver, British Columbia, Canada
- Rafeef Garbi
- Electrical and Computer Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada
- Antony J Hodgson
- Mechanical Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada
18
Garcia-Canadilla P, Sanchez-Martinez S, Crispi F, Bijnens B. Machine Learning in Fetal Cardiology: What to Expect. Fetal Diagn Ther 2020; 47:363-372. [PMID: 31910421 DOI: 10.1159/000505021] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Accepted: 11/25/2019] [Indexed: 11/19/2022]
Abstract
In fetal cardiology, imaging (especially echocardiography) has been demonstrated to help in the diagnosis and monitoring of fetuses with a compromised cardiovascular system potentially associated with several fetal conditions. Different ultrasound approaches are currently used to evaluate fetal cardiac structure and function, including conventional 2-D imaging, M-mode, and tissue Doppler imaging, among others. However, assessment of the fetal heart is still challenging, mainly due to involuntary movements of the fetus, the small size of the heart, and the lack of expertise in fetal echocardiography of some sonographers. Therefore, the use of new technologies to improve the primary acquired images, to help extract measurements, or to aid in the diagnosis of cardiac abnormalities is of great importance for optimal assessment of the fetal heart. Machine learning (ML) is a computer science discipline focused on teaching a computer to perform tasks with specific goals without explicitly programming the rules on how to perform them. In this review we provide a brief overview of the potential of ML techniques to improve the evaluation of fetal cardiac function by optimizing image acquisition and quantification/segmentation, as well as to aid in improving the prenatal diagnosis of fetal cardiac remodeling and abnormalities.
Affiliation(s)
- Patricia Garcia-Canadilla
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Institute of Cardiovascular Science, University College London, London, United Kingdom
- Fatima Crispi
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Fetal Medicine Research Center, BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), Institut Clínic de Ginecologia Obstetricia i Neonatologia, Centre for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Bart Bijnens
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium; ICREA, Barcelona, Spain
19
Ambroise Grandjean G, Hossu G, Banasiak C, Ciofolo-Veit C, Raynaud C, Rouet L, Morel O, Beaumont M. Optimization of Fetal Biometry With 3D Ultrasound and Image Recognition (EPICEA): protocol for a prospective cross-sectional study. BMJ Open 2019; 9:e031777. [PMID: 31843832 PMCID: PMC6924693 DOI: 10.1136/bmjopen-2019-031777] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
CONTEXT Variability in 2D ultrasound (US) is related to the acquisition of reference planes and the positioning of callipers, and could be reduced by combining US volume acquisition with anatomical structure recognition. OBJECTIVES The primary objective is to assess the consistency of 3D measurements (automated and manual) extracted from a fetal US volume with standard 2D US measurements (I). Secondary objectives are to evaluate the feasibility of using software to obtain automated measurements of the fetal head, abdomen, and femur from US acquisitions (II) and to assess the impact of automation on intraobserver and interobserver reproducibility (III). METHODS AND ANALYSIS 225 fetuses will be measured at 16-30 weeks of gestation. For each fetus, six volumes (two each for the head, abdomen, and thigh) will be prospectively acquired after standard 2D biometry measurements (head and abdominal circumference, femoral length) are performed. Each volume will later be processed by both a software package and an operator to extract the reference planes and perform the corresponding measurements. The different sets of measurements will be compared using Bland-Altman plots to assess the agreement between the different processes (I). The feasibility of using the software in clinical practice will be assessed through the failure rate of processing and the quality score of the measurements (II). Intraclass correlation coefficients will be used to evaluate the intraobserver and interobserver reproducibility (III). ETHICS AND DISSEMINATION The study and related consent forms were approved by an institutional review board (CPP SUD-EST 3) on 2 October 2018, under reference number 2018-033 B. The study was registered in the https://clinicaltrials.gov registry on 23 January 2019, under the number NCT03812471.
This study will enable an improved understanding and dissemination of the potential benefits of 3D automated measurements and is a prerequisite for the design of intention-to-treat randomised studies assessing their impact. TRIAL REGISTRATION NUMBER NCT03812471; Pre-results.
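Reproducibility in this protocol is quantified with intraclass correlation coefficients. As a rough illustration, the simplest one-way form, ICC(1,1), can be computed from the between- and within-subject mean squares; the rating matrix below is invented for the example:

```python
import numpy as np

def icc_1_1(x):
    """One-way random-effects ICC(1,1) for an (n subjects x k raters) matrix."""
    x = np.asarray(x, float)
    n, k = x.shape
    row_means = x.mean(axis=1)
    msb = k * ((row_means - x.mean()) ** 2).sum() / (n - 1)     # between subjects
    msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# two raters in perfect agreement -> ICC = 1.0
ratings = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
print(icc_1_1(ratings))  # 1.0
```

Two-way forms such as ICC(2,1) additionally model a rater effect, which is the usual choice when the same fixed set of observers rates every fetus.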
Affiliation(s)
- Gaëlle Ambroise Grandjean
- Obstetrics Department, CHRU Nancy, Nancy, Lorraine, France
- Midwifery Department, Université de Lorraine, Nancy, France
- Inserm IADI, Université de Lorraine, Nancy, France
- Gabriela Hossu
- CIC-IT, CHRU Nancy, Université de Lorraine, Nancy, France
- Olivier Morel
- Obstetrics Department, CHRU Nancy, Nancy, Lorraine, France
- Inserm IADI, Université de Lorraine, Nancy, France