1. Qiu Y, Xie Z, Jiang Y, Ma J. Segment anything with inception module for automated segmentation of endometrium in ultrasound images. J Med Imaging (Bellingham) 2024; 11:034504. PMID: 38827779; PMCID: PMC11137375; DOI: 10.1117/1.jmi.11.3.034504.
Abstract
Purpose: Accurate segmentation of the endometrium in ultrasound images is essential for gynecological diagnostics and treatment planning. Manual segmentation methods are time-consuming and subjective, prompting the exploration of automated solutions. We introduce "segment anything with inception module" (SAIM), a specialized adaptation of the segment anything model, tailored specifically for the segmentation of endometrium structures in ultrasound images. Approach: SAIM incorporates enhancements to the image encoder structure and integrates point prompts to guide the segmentation process. We utilized ultrasound images from patients undergoing hysteroscopic surgery in the gynecological department to train and evaluate the model. Results: Our study demonstrates SAIM's superior segmentation performance through quantitative and qualitative evaluations, surpassing existing automated methods. SAIM achieves a Dice similarity coefficient of 76.31% and an intersection over union score of 63.71%, outperforming traditional task-specific deep learning models and other SAM-based foundation models. Conclusions: The proposed SAIM achieves high segmentation accuracy, providing high diagnostic precision and efficiency. Furthermore, it could serve as an efficient educational and diagnostic aid for junior medical professionals.
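The Dice similarity coefficient and intersection-over-union scores reported above are standard overlap metrics for binary segmentation masks. As a hedged illustration (not taken from the paper's code), they can be computed from a predicted and a ground-truth mask as follows:

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Compute Dice and IoU for two binary segmentation masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    iou = intersection / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

# Toy example: two overlapping 2x3 rectangles
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 0:3] = True
print(overlap_metrics(pred, gt))  # Dice = 8/12 ~ 0.667, IoU = 4/8 = 0.5
```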
Affiliation(s)
- Yang Qiu
- Beijing Zhongguancun Hospital, Beijing, China
- Zhun Xie
- Beihang University, School of Instrumentation and Opto-electric Engineering, Beijing, China
- Jianguo Ma
- Beihang University, School of Instrumentation and Opto-electric Engineering, Beijing, China
2. Shui Y, Wang Z, Liu B, Wang W, Fu S, Li Y. A three-path network with multi-scale selective feature fusion, edge-inspiring and edge-guiding for liver tumor segmentation. Comput Biol Med 2024; 168:107841. PMID: 38081117; DOI: 10.1016/j.compbiomed.2023.107841.
Abstract
Automatic liver tumor segmentation is one of the most important tasks in computer-aided diagnosis and treatment. Deep learning techniques have gained increasing popularity for medical image segmentation in recent years. However, due to the various shapes, sizes, and obscure boundaries of tumors, it is still difficult to automatically extract tumor regions from CT images. Based on the complementarity of edge detection and region segmentation, a three-path structure with a multi-scale selective feature fusion (MSFF) module, a multi-channel feature fusion (MFF) module, an edge-inspiring (EI) module, and an edge-guiding (EG) module is proposed in this paper. The MSFF module covers the generation, fusion, and selection of multi-scale features, and can adaptively correct the response weights of multiple branches to filter out redundant information. The MFF module integrates richer hierarchical features to capture targets at different scales. The EI module aggregates high-level semantic information at different levels to obtain fine edge semantics, which is injected into the EG module for representation learning of segmentation features. Experiments on the LiTS2017 dataset show that the proposed method achieves a Dice index of 85.55% and a Jaccard index of 81.11%, higher than those obtained by current state-of-the-art methods. Cross-dataset validation on the 3Dircadb and Clinical datasets demonstrates the generalization and robustness of the proposed method, which achieves Dice indices of 80.14% and 81.68%, respectively.
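The abstract describes the MSFF module as generating, fusing, and selecting multi-scale features with adaptive branch weights; the paper's exact design is not given here. The following is only a minimal PyTorch-style sketch of one common way to realize such selective fusion (parallel dilated branches whose responses are reweighted by softmax attention), offered purely as an illustration of the idea:

```python
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    """Illustrative multi-scale selective fusion: parallel dilated branches
    whose feature maps are adaptively reweighted by softmax attention."""
    def __init__(self, channels: int, dilations=(1, 2, 4), reduction: int = 4):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        hidden = max(channels // reduction, 8)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels * len(dilations), kernel_size=1),
        )
        self.num_branches = len(dilations)

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        fused = feats.sum(dim=1)                                    # summary of all branches
        weights = self.attn(fused)                                  # (B, K*C, 1, 1)
        b = weights.shape[0]
        weights = weights.view(b, self.num_branches, -1, 1, 1).softmax(dim=1)
        return (feats * weights).sum(dim=1)                         # selectively fused features

# x = torch.randn(2, 64, 32, 32); y = SelectiveFusion(64)(x)  # y.shape == x.shape
```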
Affiliation(s)
- Yuanyuan Shui
- School of Mathematics, Shandong University, Jinan, 250100, China
- Zhendong Wang
- School of Mathematics, Shandong University, Jinan, 250100, China
- Bin Liu
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
- Wei Wang
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
- Shujun Fu
- School of Mathematics, Shandong University, Jinan, 250100, China; Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
- Yuliang Li
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
3. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. PMID: 37959298; PMCID: PMC10649694; DOI: 10.3390/jcm12216833.
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred imaging modality. US is considered cost-effective and easily accessible, but it is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study provides an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with available full texts were assigned to the OB/GYN subspecialties and their respective research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and the assessment of the endometrium and pelvic floor. In conclusion, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
4. Gao J, Lao Q, Liu P, Yi H, Kang Q, Jiang Z, Wu X, Li K, Chen Y, Zhang L. Anatomically Guided Cross-Domain Repair and Screening for Ultrasound Fetal Biometry. IEEE J Biomed Health Inform 2023; 27:4914-4925. PMID: 37486830; DOI: 10.1109/jbhi.2023.3298096.
Abstract
Ultrasound-based estimation of fetal biometry is extensively used to diagnose prenatal abnormalities and to monitor fetal growth, for which accurate segmentation of the fetal anatomy is a crucial prerequisite. Although deep neural network-based models have achieved encouraging results on this task, inevitable distribution shifts in ultrasound images can still cause a severe performance drop in real-world deployment scenarios. In this article, we propose a complete ultrasound fetal examination system that addresses this problem by repairing and screening anatomically implausible results. Our system consists of three main components: a routine segmentation network, a fetal anatomical key-point guided repair network, and a shape-coding based selective screener. Guided by the anatomical key points, our repair network has stronger cross-domain repair capabilities, which can substantially improve the outputs of the segmentation network. By quantifying the distance between an arbitrary segmentation mask and its corresponding anatomical shape class, the proposed shape-coding based selective screener can then effectively reject implausible results that cannot be fully repaired. Extensive experiments demonstrate that our proposed framework provides strong anatomical guarantees and outperforms other methods in three different cross-domain scenarios.
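The screener described above rejects segmentations that lie too far from their anatomical shape class. The abstract does not spell out how the shape codes are built, so the sketch below uses a generic stand-in (PCA shape codes fitted on plausible training masks, with reconstruction error as the distance and a percentile-based rejection threshold) purely to illustrate the screening idea, not the authors' actual method:

```python
import numpy as np
from sklearn.decomposition import PCA

class ShapeScreener:
    """Illustrative screener: masks whose PCA reconstruction error exceeds a
    threshold calibrated on anatomically plausible training masks are rejected."""
    def __init__(self, n_components: int = 16, percentile: float = 99.0):
        self.pca = PCA(n_components=n_components)
        self.percentile = percentile
        self.threshold = None

    def fit(self, masks: np.ndarray):
        # masks: (N, H, W) binary arrays of plausible segmentations
        X = masks.reshape(len(masks), -1).astype(float)
        Z = self.pca.fit_transform(X)
        errors = np.linalg.norm(X - self.pca.inverse_transform(Z), axis=1)
        self.threshold = np.percentile(errors, self.percentile)
        return self

    def is_plausible(self, mask: np.ndarray) -> bool:
        x = mask.reshape(1, -1).astype(float)
        z = self.pca.transform(x)
        error = np.linalg.norm(x - self.pca.inverse_transform(z))
        return error <= self.threshold
```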
5. GLAN: GAN Assisted Lightweight Attention Network for Biomedical Imaging Based Diagnostics. Cognit Comput 2023. DOI: 10.1007/s12559-023-10131-w.
6. Chen Z, Wang Z, Du M, Liu Z. Artificial Intelligence in the Assessment of Female Reproductive Function Using Ultrasound: A Review. J Ultrasound Med 2022; 41:1343-1353. PMID: 34524706; PMCID: PMC9292970; DOI: 10.1002/jum.15827.
Abstract
The incidence of infertility has been increasing worldwide in recent years, and novel methods for accurate assessment are greatly needed. Artificial intelligence (AI) has gradually become an effective supplementary method for the assessment of female reproductive function. It has been used in clinical follicular monitoring, selection of the optimal timing for transplantation, and prediction of pregnancy outcome. Some reviews summarize the use of AI in this field, but few focus on the assessment of female reproductive function by AI-aided ultrasound. In this review, we mainly discuss the applicability, feasibility, and clinical value of AI in ultrasound for monitoring follicles, assessing endometrial receptivity, and predicting the pregnancy outcome of in vitro fertilization and embryo transfer (IVF-ET). The limitations, challenges, and future trends of ultrasound combined with AI in providing efficient and individualized evaluation of female reproductive function are also discussed.
Affiliation(s)
- Zhiyi Chen
- The First Affiliated Hospital, Medical Imaging Center, Hengyang Medical School, University of South China, Hengyang, China
- Institute of Medical Imaging, University of South China, Hengyang, China
- Ziyao Wang
- The First Affiliated Hospital, Medical Imaging Center, Hengyang Medical School, University of South China, Hengyang, China
- Meng Du
- Institute of Medical Imaging, University of South China, Hengyang, China
- Zhenyu Liu
- The First Affiliated Hospital, Medical Imaging Center, Hengyang Medical School, University of South China, Hengyang, China
7. CGRNet: Contour-guided graph reasoning network for ambiguous biomedical image segmentation. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103621.
8. Liu Y, Zhou Q, Peng B, Jiang J, Fang L, Weng W, Wang W, Wang S, Zhu X. Automatic Measurement of Endometrial Thickness From Transvaginal Ultrasound Images. Front Bioeng Biotechnol 2022; 10:853845. PMID: 35425763; PMCID: PMC9001908; DOI: 10.3389/fbioe.2022.853845.
Abstract
Purpose: Endometrial thickness is one of the most important indicators in endometrial disease screening and diagnosis. Herein, we propose a method for automated measurement of endometrial thickness from transvaginal ultrasound images. Methods: Accurate automated measurement of endometrial thickness relies on endometrium segmentation from transvaginal ultrasound images that usually have ambiguous boundaries and heterogeneous textures. Therefore, a two-step method was developed for automated measurement of endometrial thickness. First, a semantic segmentation method was developed based on deep learning, to segment the endometrium from 2D transvaginal ultrasound images. Second, we estimated endometrial thickness from the segmented results, using a largest inscribed circle searching method. Overall, 8,119 images (size: 852 × 1136 pixels) from 467 cases were used to train and validate the proposed method. Results: We achieved an average Dice coefficient of 0.82 for endometrium segmentation using a validation dataset of 1,059 images from 71 cases. With validation using 3,210 images from 214 cases, 89.3% of endometrial thickness errors were within the clinically accepted range of ±2 mm. Conclusion: Endometrial thickness can be automatically and accurately estimated from transvaginal ultrasound images for clinical screening and diagnosis.
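The thickness estimate in step two relies on finding the largest circle inscribed in the segmented endometrium. A common way to do this (a minimal sketch assuming an isotropic pixel spacing, not the authors' released code) uses the Euclidean distance transform: the maximum distance from an interior pixel to the background gives the radius of the largest inscribed circle, and twice that radius approximates the thickness.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def endometrial_thickness_mm(mask: np.ndarray, pixel_spacing_mm: float) -> float:
    """Estimate thickness as the diameter of the largest circle inscribed in a
    binary endometrium mask (isotropic pixel spacing assumed)."""
    dist = distance_transform_edt(mask.astype(bool))  # distance to nearest background pixel
    radius_px = dist.max()                            # radius of the largest inscribed circle
    return 2.0 * radius_px * pixel_spacing_mm

# Example: a 10-pixel-thick horizontal band at 0.1 mm/pixel gives roughly 1.0 mm
mask = np.zeros((100, 100), dtype=bool)
mask[45:55, 10:90] = True
print(endometrial_thickness_mm(mask, pixel_spacing_mm=0.1))
```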
Affiliation(s)
- Yiyang Liu
- Biomedical Information Engineering Lab, The University of Aizu, Aizuwakamatsu, Japan
- Qin Zhou
- Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Boyuan Peng
- Biomedical Information Engineering Lab, The University of Aizu, Aizuwakamatsu, Japan
- Jingjing Jiang
- Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Li Fang
- Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Weihao Weng
- Biomedical Information Engineering Lab, The University of Aizu, Aizuwakamatsu, Japan
- Wenwen Wang
- Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Shixuan Wang
- Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Xin Zhu
- Biomedical Information Engineering Lab, The University of Aizu, Aizuwakamatsu, Japan
- *Correspondence: Wenwen Wang, Shixuan Wang, Xin Zhu
9. Athavale AM, Hart PD, Itteera M, Cimbaluk D, Patel T, Alabkaa A, Arruda J, Singh A, Rosenberg A, Kulkarni H. Development and Validation of a Deep Learning Model to Quantify Interstitial Fibrosis and Tubular Atrophy From Kidney Ultrasonography Images. JAMA Netw Open 2021; 4:e2111176. PMID: 34028548; PMCID: PMC8144924; DOI: 10.1001/jamanetworkopen.2021.11176.
Abstract
IMPORTANCE Interstitial fibrosis and tubular atrophy (IFTA) is a strong indicator of decline in kidney function and is measured using histopathological assessment of a kidney biopsy core. At present, a noninvasive test to assess IFTA is not available. OBJECTIVE To develop and validate a deep learning (DL) algorithm to quantify IFTA from kidney ultrasonography images. DESIGN, SETTING, AND PARTICIPANTS This was a single-center diagnostic study of consecutive patients who underwent native kidney biopsy at John H. Stroger Jr. Hospital of Cook County, Chicago, Illinois, between January 1, 2014, and December 31, 2018. A DL algorithm was trained, validated, and tested to classify IFTA from kidney ultrasonography images. Of 6135 Crimmins-filtered ultrasonography images, 5523 were used for training (5122 images) and validation (401 images), and 612 were used to test the accuracy of the DL system. Kidney segmentation was performed using the U-Net architecture, and classification was performed using a convolutional neural network-based feature extractor and extreme gradient boosting. IFTA scored by a nephropathologist on trichrome-stained kidney biopsy slides was used as the reference standard. IFTA was divided into 4 grades (grade 1, 0%-24%; grade 2, 25%-49%; grade 3, 50%-74%; and grade 4, 75%-100%). Data analysis was performed from December 2019 to May 2020. MAIN OUTCOMES AND MEASURES Prediction of IFTA grade was measured using the metrics precision, recall, accuracy, and F1 score. RESULTS This study included 352 patients (mean [SD] age, 47.43 [14.37] years), of whom 193 (54.82%) were women. There were 159 patients with IFTA grade 1 (2701 ultrasonography images), 74 patients with IFTA grade 2 (1239 ultrasonography images), 41 patients with IFTA grade 3 (701 ultrasonography images), and 78 patients with IFTA grade 4 (1494 ultrasonography images). Kidney ultrasonography images were segmented with 91% accuracy. In the independent test set, the point estimates for performance metrics showed precision of 0.8927 (95% CI, 0.8682-0.9172), recall of 0.8037 (95% CI, 0.7722-0.8352), accuracy of 0.8675 (95% CI, 0.8406-0.8944), and an F1 score of 0.8389 (95% CI, 0.8098-0.8680) at the image level. Corresponding estimates at the patient level were precision of 0.9003 (95% CI, 0.8644-0.9362), recall of 0.8421 (95% CI, 0.7984-0.8858), accuracy of 0.8955 (95% CI, 0.8589-0.9321), and an F1 score of 0.8639 (95% CI, 0.8228-0.9049). Accuracy at the patient level was highest for IFTA grade 1 and IFTA grade 4. The accuracy (approximately 90%) remained high irrespective of the timing of ultrasonography studies and the biopsy diagnosis. The predictive performance of the DL system did not show significant improvement when combined with baseline clinical characteristics. CONCLUSIONS AND RELEVANCE These findings suggest that a DL algorithm can accurately and independently predict IFTA from kidney ultrasonography images.
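The four IFTA grades above are defined by fixed percentage bands of fibrosis on the biopsy slide. As a small illustration (a hypothetical helper, not taken from the study's code), mapping a pathologist's IFTA percentage to the grade used as the reference standard looks like this:

```python
def ifta_grade(ifta_percent: float) -> int:
    """Map IFTA extent (0-100%) to the 4-grade reference standard:
    grade 1: 0-24%, grade 2: 25-49%, grade 3: 50-74%, grade 4: 75-100%."""
    if not 0.0 <= ifta_percent <= 100.0:
        raise ValueError("IFTA percentage must be between 0 and 100")
    if ifta_percent < 25:
        return 1
    if ifta_percent < 50:
        return 2
    if ifta_percent < 75:
        return 3
    return 4

assert [ifta_grade(p) for p in (10, 30, 60, 90)] == [1, 2, 3, 4]
```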
Affiliation(s)
- Ambarish M. Athavale
- Division of Nephrology, Department of Medicine, Cook County Health, Chicago, Illinois
- Peter D. Hart
- Division of Nephrology, Department of Medicine, Cook County Health, Chicago, Illinois
- Mathew Itteera
- Division of Nephrology, Department of Medicine, Cook County Health, Chicago, Illinois
- David Cimbaluk
- Department of Pathology, Rush University Medical Center, Chicago, Illinois
- Tushar Patel
- Department of Pathology, University of Illinois at Chicago, Chicago
- Anas Alabkaa
- Department of Pathology, Rush University Medical Center, Chicago, Illinois
- Jose Arruda
- Division of Nephrology, University of Illinois at Chicago, Chicago
- Ashok Singh
- Division of Nephrology, Department of Medicine, Cook County Health, Chicago, Illinois
- Avi Rosenberg
- Department of Pathology, Johns Hopkins University, Baltimore, Maryland
10. Yi J, Kang HK, Kwon JH, Kim KS, Park MH, Seong YK, Kim DW, Ahn B, Ha K, Lee J, Hah Z, Bang WC. Technology trends and applications of deep learning in ultrasonography: image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2020; 40:7-22. PMID: 33152846; PMCID: PMC7758107; DOI: 10.14366/usg.20102.
Abstract
In this review of the most recent applications of deep learning to ultrasound imaging, the architectures of deep learning networks are briefly explained for the medical imaging applications of classification, detection, segmentation, and generation. Ultrasonography applications for image processing and diagnosis are then reviewed and summarized, along with representative imaging studies of the breast, thyroid, heart, kidney, liver, and fetal head. Efforts towards workflow enhancement are also reviewed, with an emphasis on view recognition, scanning guidance, image quality assessment, and quantification and measurement. Finally, some future prospects are presented regarding image quality enhancement, diagnostic support, and improvements in workflow efficiency, along with remarks on hurdles, benefits, and necessary collaborations.
Affiliation(s)
- Jonghyon Yi
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Ho Kyung Kang
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Jae-Hyun Kwon
- DR Imaging R&D Lab, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Kang-Sik Kim
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Moon Ho Park
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Yeong Kyeong Seong
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Dong Woo Kim
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Byungeun Ahn
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Kilsu Ha
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Jinyong Lee
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Zaegyoo Hah
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Won-Chul Bang
- Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea; Product Strategy Team, Samsung Medison Co., Ltd., Seoul, Korea