1. Wang Y, Xu Z, Dan R, Yao C, Shao J, Sun Y, Wang Y, Ye J. Automated classification of multiple ophthalmic diseases using ultrasound images by deep learning. Br J Ophthalmol 2024;108:999-1004. [PMID: 37852741] [DOI: 10.1136/bjo-2022-322953]
Abstract
BACKGROUND Ultrasound imaging is suitable for detecting and diagnosing ophthalmic abnormalities. However, a shortage of experienced sonographers and ophthalmologists remains a problem. This study aims to develop a multibranch transformer network (MBT-Net) for the automated classification of multiple ophthalmic diseases using B-mode ultrasound images. METHODS Ultrasound images with six clinically confirmed categories, including normal, retinal detachment, vitreous haemorrhage, intraocular tumour, posterior scleral staphyloma and other abnormalities, were used to develop and evaluate the MBT-Net. Images were acquired on five different ultrasonic devices operated by different sonographers and divided into a training set, a validation set, an internal testing set and a temporal external testing set. Two senior ophthalmologists and two junior ophthalmologists were recruited to compare the model's performance. RESULTS A total of 10 184 ultrasound images were collected. The MBT-Net achieved an accuracy of 87.80% (95% CI 86.26% to 89.18%) in the internal testing set, which was significantly higher than that of the junior ophthalmologists (95% CI 67.37% to 79.16%; both p<0.05) and lower than that of the senior ophthalmologists (95% CI 89.45% to 92.61%; both p<0.05). The micro-average area under the curve of the six-category classification was 0.98. With reference to the comprehensive clinical diagnosis, agreement was almost perfect for the MBT-Net (kappa=0.85, p<0.05). There was no significant difference in the accuracy of the MBT-Net across the five ultrasonic devices (p=0.27). The MBT-Net achieved an accuracy of 82.21% (95% CI 78.45% to 85.44%) in the temporal external testing set. CONCLUSIONS The MBT-Net showed high accuracy for screening and diagnosing multiple ophthalmic diseases from ultrasound images alone, across multiple operators and devices.
Affiliation(s)
- Yijie Wang
- Department of Ophthalmology, the Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zihao Xu
- Microelectronics CAD Center, Hangzhou Dianzi University, Hangzhou, China
- Ruilong Dan
- Microelectronics CAD Center, Hangzhou Dianzi University, Hangzhou, China
- Chunlei Yao
- Department of Ophthalmology, the Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Ji Shao
- Department of Ophthalmology, the Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yiming Sun
- Department of Ophthalmology, the Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
- Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
2. Zhang J, Dawkins A. Artificial Intelligence in Ultrasound Imaging: Where Are We Now? Ultrasound Q 2024;40:93-97. [PMID: 38842384] [DOI: 10.1097/ruq.0000000000000680]
Affiliation(s)
- Jie Zhang
- Department of Radiology, University of Kentucky, Lexington, KY
3. Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024;250:108200. [PMID: 38677080] [DOI: 10.1016/j.cmpb.2024.108200]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings than single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes image appearance to enable reliable AI analysis of multi-source medical imaging. METHODS A literature search following PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but machine and deep learning adoption has risen recently. Color imaging modalities such as digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. Across all the modalities covered by this review, image harmonization improved AI performance, with increases of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
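The grayscale normalization quantified above belongs to a family of simple per-image intensity rescalings. As a minimal illustration (not code from any reviewed study; the function name and target range are our own), min-max normalization in NumPy looks like:

```python
import numpy as np

def normalize_grayscale(img, target_min=0.0, target_max=1.0):
    """Min-max grayscale normalization: rescale intensities into a common
    range so images from different scanners share a comparable scale."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # constant image: map everything to target_min
        return np.full_like(img, target_min)
    scaled = (img - lo) / (hi - lo)
    return target_min + scaled * (target_max - target_min)
```

More elaborate harmonization (histogram matching, deep-learning style transfer) follows the same pattern of mapping each source image onto a shared reference distribution.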
Affiliation(s)
- Silvia Seoni
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Alen Shahini
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Kristen M Meiburger
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Francesco Marzola
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giulia Rotunno
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
- Filippo Molinari
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
4. Sun G, Cai L, Yan X, Nie W, Liu X, Xu J, Zou X. A prediction model based on digital breast pathology image information. PLoS One 2024;19:e0294923. [PMID: 38758814] [PMCID: PMC11101065] [DOI: 10.1371/journal.pone.0294923]
Abstract
BACKGROUND The workload of breast cancer pathological diagnosis is very heavy. The purpose of this study is to establish a nomogram model based on pathological images to predict whether breast lesions are benign or malignant and to validate its predictive performance. METHODS In this retrospective study, a total of 2,723 H&E-stained pathological images were collected from 1,474 patients at Qingdao Central Hospital between 2019 and 2022. The dataset consisted of 509 benign tumor images (adenosis and fibroadenoma) and 2,214 malignant tumor images (infiltrating ductal carcinoma). The images were divided into a training set (1,907) and a validation set (816). Python 3.7 was used to extract the values of the R channel, G channel, B channel, and one-dimensional information entropy from all images. Multivariable logistic regression was used to select variables and establish the breast tissue pathological image prediction model. RESULTS The R channel value, B channel value, and one-dimensional information entropy of the images were identified as independent predictive factors for the classification of benign and malignant pathological images (P < 0.05). The area under the curve (AUC) of the nomogram model was 0.889 (95% CI: 0.869, 0.909) in the training set and 0.838 (95% CI: 0.798, 0.877) in the validation set. The calibration curve of the nomogram model was close to the ideal curve, and decision curve analysis indicated that the model has high value for auxiliary diagnosis. CONCLUSION The nomogram model for predicting benign and malignant breast diseases based on pathological images demonstrates good predictive performance and can assist in the diagnosis of breast tissue pathology images.
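The four predictors described (mean R, G, and B channel values plus one-dimensional information entropy) are straightforward to compute. A minimal sketch, assuming channel means and a 256-bin grayscale histogram (the study's exact extraction settings are not reproduced; the helper name is ours):

```python
import numpy as np

def extract_features(rgb_img):
    """Compute image-level features of the kind used as candidate
    predictors: mean R, G, B values and the one-dimensional information
    entropy of the grayscale intensity histogram."""
    img = np.asarray(rgb_img, dtype=np.float64)
    r_mean = img[..., 0].mean()
    g_mean = img[..., 1].mean()
    b_mean = img[..., 2].mean()
    gray = img.mean(axis=-1).astype(np.uint8)          # simple grayscale proxy
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                                       # avoid log(0)
    entropy = -np.sum(p * np.log2(p))                  # Shannon entropy, bits
    return r_mean, g_mean, b_mean, entropy
```

Each image then yields a four-value feature vector that a multivariable logistic regression can take as candidate covariates.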
Affiliation(s)
- Guoxin Sun
- School of Clinical Medicine, Qingdao University, Qingdao, China
- Liying Cai
- College of Nursing and Rehabilitation, North China University of Science and Technology, Tangshan City, China
- Xiong Yan
- Department of Pathology, Qingdao Central Hospital, Qingdao, China
- Weihong Nie
- School of Clinical Medicine, Qingdao University, Qingdao, China
- Xin Liu
- School of Clinical Medicine, Qingdao University, Qingdao, China
- Jing Xu
- Department of Pathology, Qingdao Central Hospital, Qingdao, China
- Xiao Zou
- Department of Breast Surgery, Xiangdong Hospital Affiliated to Hunan Normal University, Hunan, China
5. Ma S, Li Y, Yin J, Niu Q, An Z, Du L, Li F, Gu J. Prospective study of AI-assisted prediction of breast malignancies in physical health examinations: role of off-the-shelf AI software and comparison to radiologist performance. Front Oncol 2024;14:1374278. [PMID: 38756651] [PMCID: PMC11096442] [DOI: 10.3389/fonc.2024.1374278]
Abstract
Objective In physical health examinations, breast sonography is a commonly used imaging method, but discrepancies among radiologists and health centers can lead to repeated examinations and unnecessary biopsies. This study explores the role of off-the-shelf artificial intelligence (AI) software in assisting radiologists to classify incidentally found breast masses at two health centers. Methods Female patients undergoing breast ultrasound examinations with incidentally discovered breast masses were categorized according to the 5th edition of the Breast Imaging Reporting and Data System (BI-RADS), with categories 3 to 5 included in this study. The examinations were conducted at two municipal health centers from May 2021 to May 2023. The final pathological results from surgical resection or biopsy served as the gold standard. Ultrasonographic images were obtained in longitudinal and transverse sections, and two junior radiologists and one senior radiologist independently assessed the images while blinded to the pathological findings. The BI-RADS classification was adjusted after AI assistance, and diagnostic performance was compared using receiver operating characteristic curves. Results A total of 196 patients with 202 breast masses were included, with pathological results confirming 107 benign and 95 malignant masses. The receiver operating characteristic curves showed that the experienced breast radiologist had higher diagnostic performance in BI-RADS classification than the junior radiologists, similar to the AI classification (AUC = 0.936, 0.806, 0.896, and 0.950; p < 0.05). The AI software improved the accuracy, sensitivity, and negative predictive value of the adjusted BI-RADS classification for the junior radiologists (p < 0.05), while no difference was observed for the senior radiologist. Furthermore, AI increased the negative predictive value for BI-RADS 4a masses and the positive predictive value for 4b masses among radiologists (p < 0.05). AI enhanced the sensitivity of detection more for invasive breast cancer than for ductal carcinoma in situ and rare subtypes of breast cancer. Conclusions The AI software enhances diagnostic efficiency for breast masses, narrowing the performance gap between junior and senior radiologists, particularly for BI-RADS 4a and 4b masses. This improvement reduces unnecessary repeat examinations and biopsies, optimizing medical resource utilization and enhancing overall diagnostic effectiveness.
Affiliation(s)
- Sai Ma
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yanfang Li
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jun Yin
- Department of Ultrasound, Shanghai Fourth People's Hospital, Shanghai, China
- Qinghua Niu
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zichen An
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lianfang Du
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fan Li
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiying Gu
- Department of Ultrasound, Shidong Hospital, Yangpu District, Shanghai, China
6. Zhang D, Zhang XY, Lu WW, Liao JT, Zhang CX, Tang Q, Cui XW. Predicting Ki-67 expression in hepatocellular carcinoma: nomogram based on clinical factors and contrast-enhanced ultrasound radiomics signatures. Abdom Radiol (NY) 2024;49:1419-1431. [PMID: 38461433] [DOI: 10.1007/s00261-024-04191-1]
Abstract
PURPOSE To develop a contrast-enhanced ultrasound (CEUS) clinic-radiomics nomogram for individualized assessment of Ki-67 expression in hepatocellular carcinoma (HCC). METHODS A retrospective cohort comprising 310 HCC patients who underwent preoperative CEUS (using SonoVue) at three centers was partitioned into a training set, a validation set, and an external test set. Radiomics signatures indicating Ki-67 phenotypes were extracted from multiphase CEUS images. After feature selection, a radiomics score (Rad-score) was calculated and a radiomics model was constructed. A clinic-radiomics nomogram was established utilizing the multiphase CEUS Rad-score and clinical risk factors; a clinical model incorporating only clinical factors was also developed for comparison. The predictive efficiency of the clinic-radiomics nomogram was evaluated with respect to clinical utility, calibration, and discrimination. RESULTS Seven radiomics signatures from multiphase CEUS images were selected to calculate the Rad-score. The clinic-radiomics nomogram, comprising the Rad-score and clinical risk factors, showed good calibration and better discriminatory capacity than the clinical model (AUCs: 0.870 vs 0.797, 0.872 vs 0.755, and 0.856 vs 0.749 in the training, validation, and external test sets, respectively) and the radiomics model (AUCs: 0.870 vs 0.752, 0.872 vs 0.733, and 0.856 vs 0.729, respectively). Both the clinical impact curve and decision curve analysis demonstrated good clinical applicability of the nomogram. CONCLUSION The clinic-radiomics nomogram constructed from multiphase CEUS images and clinical risk parameters can distinguish Ki-67 expression in HCC patients and offers useful insights to guide personalized treatment.
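A nomogram of this kind is a graphical rendering of a logistic model that combines the Rad-score with clinical covariates. A hedged sketch of the underlying arithmetic, with entirely placeholder coefficient names and values (the study's fitted coefficients are not reproduced here):

```python
import math

def nomogram_probability(rad_score, clinical_factors, coefs, intercept):
    """Logistic combination underlying a clinic-radiomics nomogram:
    p = sigmoid(b0 + b_rad * RadScore + sum_i b_i * x_i).
    `coefs` maps factor names (plus the key "rad") to coefficients."""
    z = intercept + coefs["rad"] * rad_score
    for name, value in clinical_factors.items():
        z += coefs[name] * value
    return 1.0 / (1.0 + math.exp(-z))
```

On the printed nomogram, each coefficient-times-value term is rendered as a points axis, and the summed points are read off against the probability scale.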
Affiliation(s)
- Di Zhang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Road, Hefei, 230022, Anhui, China
- Xian-Ya Zhang
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Jiefang Avenue No. 1095, Wuhan, 430030, Hubei, China
- Wen-Wu Lu
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Road, Hefei, 230022, Anhui, China
- Jin-Tang Liao
- Department of Diagnostic Ultrasound, Xiang Ya Hospital of Central South University, Changsha, 410000, Hunan, China
- Chao-Xue Zhang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Road, Hefei, 230022, Anhui, China
- Qi Tang
- Department of Ultrasonography, The First Hospital of Changsha, No. 311 Yingpan Road, Changsha, 410005, Hunan, China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Jiefang Avenue No. 1095, Wuhan, 430030, Hubei, China
7. Qiu Y, Xie Z, Jiang Y, Ma J. Segment anything with inception module for automated segmentation of endometrium in ultrasound images. J Med Imaging (Bellingham) 2024;11:034504. [PMID: 38827779] [PMCID: PMC11137375] [DOI: 10.1117/1.jmi.11.3.034504]
Abstract
Purpose Accurate segmentation of the endometrium in ultrasound images is essential for gynecological diagnostics and treatment planning. Manual segmentation methods are time-consuming and subjective, prompting the exploration of automated solutions. We introduce "segment anything with inception module" (SAIM), a specialized adaptation of the segment anything model, tailored specifically for the segmentation of endometrium structures in ultrasound images. Approach SAIM incorporates enhancements to the image encoder structure and integrates point prompts to guide the segmentation process. We utilized ultrasound images from patients undergoing hysteroscopic surgery in the gynecological department to train and evaluate the model. Results Our study demonstrates SAIM's superior segmentation performance through quantitative and qualitative evaluations, surpassing existing automated methods. SAIM achieves a dice similarity coefficient of 76.31% and an intersection over union score of 63.71%, outperforming traditional task-specific deep learning models and other SAM-based foundation models. Conclusions The proposed SAIM achieves high segmentation accuracy, providing high diagnostic precision and efficiency. Furthermore, it is potentially an efficient tool for junior medical professionals in education and diagnosis.
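The two reported metrics, the Dice similarity coefficient and intersection over union, can be computed from binary masks as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice similarity coefficient and intersection-over-union for two
    binary segmentation masks of the same shape."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    dice = 2.0 * inter / total if total else 1.0  # empty masks: define as 1
    iou = inter / union if union else 1.0
    return float(dice), float(iou)
```

Dice weights the overlap against the mean mask size, IoU against the union, so Dice is always at least as large as IoU for the same prediction.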
Affiliation(s)
- Yang Qiu
- Beijing Zhongguancun Hospital, Beijing, China
- Zhun Xie
- Beihang University, School of Instrumentation and Opto-electric Engineering, Beijing, China
- Jianguo Ma
- Beihang University, School of Instrumentation and Opto-electric Engineering, Beijing, China
8. Serrano RA, Smeltz AM. The Promise of Artificial Intelligence-Assisted Point-of-Care Ultrasonography in Perioperative Care. J Cardiothorac Vasc Anesth 2024;38:1244-1250. [PMID: 38402063] [DOI: 10.1053/j.jvca.2024.01.034]
Abstract
The role of point-of-care ultrasonography in the perioperative setting has expanded rapidly over recent years. Revolutionizing this technology further is integrating artificial intelligence to assist clinicians in optimizing images, identifying anomalies, performing automated measurements and calculations, and facilitating diagnoses. Artificial intelligence can increase point-of-care ultrasonography efficiency and accuracy, making it an even more valuable point-of-care tool. Given this topic's importance and ever-changing landscape, this review discusses the latest trends to serve as an introduction and update in this area.
Affiliation(s)
- Alan M Smeltz
- University of North Carolina School of Medicine, Chapel Hill, NC
9. Aspalter S, Gmeiner M, Gasser S, Sonnberger M, Stroh N, Rauch P, Gruber A, Stefanits H. Feasibility, Clinical Potential, and Limitations of Trans-Burr Hole Ultrasound for Postoperative Evaluation of Chronic Subdural Hematoma: A Prospective Pilot Study. Neurosurgery 2024. [PMID: 38647289] [DOI: 10.1227/neu.0000000000002957]
Abstract
BACKGROUND AND OBJECTIVES Chronic subdural hematoma (CSDH) is commonly managed through burr hole surgery. Routine follow-up using computed tomography (CT) imaging is frequently used at many institutions, contributing to significant radiation exposure. This study evaluates the feasibility, safety, and reliability of trans-burr hole sonography as an alternative postoperative imaging modality, aiming to reduce radiation exposure by decreasing the frequency of CT scans. METHODS We conducted a prospective pilot study of 20 patients who underwent burr hole surgery for CSDH. Postoperative imaging included both CT and sonographic examination through the burr hole. We assessed the ability to measure residual subdural fluid thickness under the burr hole sonographically compared with CT, the occurrence of complications, and potential factors affecting sonographic image quality. The Pearson correlation coefficient was used to assess the relationships between CT and ultrasound measurements and between axial and coronal ultrasound views. RESULTS Sonography through the burr hole was feasible in 73.5% of cases, providing measurements of residual fluid that closely paralleled CT findings, with an average discrepancy of 1.2 mm for axial and 1.4 mm for coronal sonographic views. Strong positive correlations were found between axial and coronal ultrasound (r = 0.955), CT and axial ultrasound (r = 0.936), and CT and coronal ultrasound (r = 0.920). The primary obstacle to sonographic imaging was the presence of air within the burr hole or the subdural space, which typically resolved over time after surgery. CONCLUSION Trans-burr hole sonography emerges as a promising technique for postoperative monitoring of CSDH, with the potential to safely reduce reliance on CT scans and the associated radiation exposure in selected patients. Our results support further investigation into the extended use of sonography during the follow-up phase. Prospective multicenter studies are recommended to establish the method's efficacy and to explore strategies for minimizing air presence postsurgery.
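The agreement statistics quoted are ordinary Pearson correlations between paired thickness measurements, which can be computed directly (the numbers in the usage test below are made up for illustration, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired measurement
    series, e.g. residual-fluid thickness on CT vs. trans-burr hole
    ultrasound for the same patients."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.corrcoef(x, y)[0, 1])
```

Note that a high r establishes a linear relationship, not absolute agreement; that is why the study also reports the mean discrepancy in millimetres.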
Affiliation(s)
- Stefan Aspalter
- Department of Neurosurgery, Kepler University Hospital Linz, Johannes Kepler University, Linz, Austria
- Matthias Gmeiner
- Department of Neurosurgery, Kepler University Hospital Linz, Johannes Kepler University, Linz, Austria
- Stefan Gasser
- Institute of Neuroradiology, Kepler University Hospital Linz, Johannes Kepler University, Linz, Austria
- Michael Sonnberger
- Institute of Neuroradiology, Kepler University Hospital Linz, Johannes Kepler University, Linz, Austria
- Nico Stroh
- Department of Neurosurgery, Kepler University Hospital Linz, Johannes Kepler University, Linz, Austria
- Philip Rauch
- Department of Neurosurgery, Kepler University Hospital Linz, Johannes Kepler University, Linz, Austria
- Andreas Gruber
- Department of Neurosurgery, Kepler University Hospital Linz, Johannes Kepler University, Linz, Austria
- Harald Stefanits
- Department of Neurosurgery, Kepler University Hospital Linz, Johannes Kepler University, Linz, Austria
10. Tamrat T, Zhao Y, Schalet D, AlSalamah S, Pujari S, Say L. Exploring the Use and Implications of AI in Sexual and Reproductive Health and Rights: Protocol for a Scoping Review. JMIR Res Protoc 2024;13:e53888. [PMID: 38593433] [PMCID: PMC11040437] [DOI: 10.2196/53888]
Abstract
BACKGROUND Artificial intelligence (AI) has emerged as a transformative force across the health sector and has garnered significant attention within sexual and reproductive health and rights (SRHR) due to polarizing views on its opportunities to advance care and the heightened risks and implications it brings to people's well-being and bodily autonomy. As the fields of AI and SRHR evolve, clarity is needed to bridge our understanding of how AI is being used within this historically politicized health area and raise visibility on the critical issues that can facilitate its responsible and meaningful use. OBJECTIVE This paper presents the protocol for a scoping review to synthesize empirical studies that focus on the intersection of AI and SRHR. The review aims to identify the characteristics of AI systems and tools applied within SRHR, regarding health domains, intended purpose, target users, AI data life cycle, and evidence on benefits and harms. METHODS The scoping review follows the standard methodology developed by Arksey and O'Malley. We will search the following electronic databases: MEDLINE (PubMed), Scopus, Web of Science, and CINAHL. Inclusion criteria comprise the use of AI systems and tools in sexual and reproductive health and clear methodology describing either quantitative or qualitative approaches, including program descriptions. Studies will be excluded if they focus entirely on digital interventions that do not explicitly use AI systems and tools, are about robotics or nonhuman subjects, or are commentaries. We will not exclude articles based on geographic location, language, or publication date. The study will present the uses of AI across sexual and reproductive health domains, the intended purpose of the AI system and tools, and maturity within the AI life cycle. Outcome measures will be reported on the effect, accuracy, acceptability, resource use, and feasibility of studies that have deployed and evaluated AI systems and tools. 
Ethical and legal considerations, as well as findings from qualitative studies, will be synthesized through a narrative thematic analysis. We will use the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) format for the publication of the findings. RESULTS The database searches resulted in 12,793 records when the searches were conducted in October 2023. Screening is underway, and the analysis is expected to be completed by July 2024. CONCLUSIONS The findings will provide key insights on usage patterns and evidence on the use of AI in SRHR, as well as convey key ethical, safety, and legal considerations. The outcomes of this scoping review are contributing to a technical brief developed by the World Health Organization and will guide future research and practice in this highly charged area of work. TRIAL REGISTRATION OSF Registries osf.io/ma4d9; https://osf.io/ma4d9. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/53888.
Affiliation(s)
- Tigest Tamrat
- UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction, Department of Sexual and Reproductive Health and Research, World Health Organization, Geneva, Switzerland
- Yu Zhao
- Department of Digital Health and Innovations, Science Division, World Health Organization, Geneva, Switzerland
- Denise Schalet
- Department of Digital Health and Innovations, Science Division, World Health Organization, Geneva, Switzerland
- Shada AlSalamah
- Department of Digital Health and Innovations, Science Division, World Health Organization, Geneva, Switzerland
- Information Systems Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Sameer Pujari
- Department of Digital Health and Innovations, Science Division, World Health Organization, Geneva, Switzerland
- Lale Say
- UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction, Department of Sexual and Reproductive Health and Research, World Health Organization, Geneva, Switzerland
11. Khoche S, Ellis S, Kellogg L, Fahy J, Her B, Maus TM. The Year in Perioperative Echocardiography: Selected Highlights from 2023. J Cardiothorac Vasc Anesth 2024. [PMID: 38890085] [DOI: 10.1053/j.jvca.2024.04.002]
Abstract
This article is the eighth in an annual series reviewing the research highlights of the year pertaining to the subspecialty of perioperative echocardiography for the Journal of Cardiothoracic and Vascular Anesthesia. The authors thank the editor-in-chief, Dr Kaplan, and the editorial board for the opportunity to continue this series. In most cases, these will be research articles targeted at the perioperative echocardiographic diagnosis and treatment of patients after cardiothoracic surgery; but in some cases, the articles will target the use of perioperative echocardiography in general.
Affiliation(s)
- Swapnil Khoche
- Department of Anesthesiology, UCSD Medical Center-Sulpizio Cardiovascular Center, La Jolla, CA
- Sarah Ellis
- Department of Anesthesiology, UCSD Medical Center-Sulpizio Cardiovascular Center, La Jolla, CA
- Levi Kellogg
- Department of Anesthesiology, UCSD Medical Center-Sulpizio Cardiovascular Center, La Jolla, CA
- John Fahy
- Department of Anesthesiology, UCSD Medical Center-Sulpizio Cardiovascular Center, La Jolla, CA
- Bin Her
- Department of Anesthesiology, UCSD Medical Center-Sulpizio Cardiovascular Center, La Jolla, CA
- Timothy M Maus
- Department of Anesthesiology, UCSD Medical Center-Sulpizio Cardiovascular Center, La Jolla, CA
12
Wang R, Liu X, Tan G. Coupling speckle noise suppression with image classification for deep-learning-aided ultrasound diagnosis. Phys Med Biol 2024; 69:065001. [PMID: 38359452] [DOI: 10.1088/1361-6560/ad29bb]
Abstract
Objective. During deep-learning-aided (DL-aided) ultrasound (US) diagnosis, US image classification is a foundational task. Due to the serious speckle noise in US images, the performance of DL models may be degraded. Pre-denoising US images before their use in DL models is usually a logical choice. However, our investigation suggests that pre-speckle-denoising is not consistently advantageous. Furthermore, because speckle denoising is decoupled from the subsequent DL classification, intensive parameter tuning is inevitable to attain the optimal denoising parameters for various datasets and DL models. Pre-denoising also adds extra complexity to the classification task and makes it no longer end-to-end. Approach. In this work, we propose a multi-scale high-frequency-based feature augmentation (MSHFFA) module that couples feature augmentation and speckle noise suppression with specific DL models, preserving an end-to-end fashion. In MSHFFA, the input US image is first decomposed into multi-scale low-frequency and high-frequency components (LFC and HFC) with the discrete wavelet transform. Then, multi-scale augmentation maps are obtained by computing the correlation between LFC and HFC. Last, the original DL model features are augmented with the multi-scale augmentation maps. Main results. On two public US datasets, all six renowned DL models exhibited enhanced F1-scores compared with their original versions (by 1.31%-8.17% on the POCUS dataset and 0.46%-3.89% on the BLU dataset) after using the MSHFFA module, with only an approximately 1% increase in model parameter count. Significance. The proposed MSHFFA has broad applicability and commendable efficiency and thus can be used to enhance the performance of DL-aided US diagnosis. The code is available at https://github.com/ResonWang/MSHFFA.
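The decompose-then-augment idea can be illustrated with a toy NumPy sketch: a single-level Haar transform splits the image into low- and high-frequency components, and a hypothetical gating map scales a feature map. This is only a minimal illustration; the paper's actual MSHFFA module operates on multi-scale DL feature maps and a learned correlation, neither of which is reproduced here.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns the low-frequency component (LL)
    and a combined high-frequency magnitude (|LH| + |HL| + |HH|)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0                 # low-frequency component (LFC)
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    hfc = np.abs(lh) + np.abs(hl) + np.abs(hh)  # high-frequency component (HFC)
    return ll, hfc

def augmentation_map(lfc, hfc, eps=1e-8):
    """Toy 'correlation' between LFC and HFC: normalised elementwise product,
    squashed through a sigmoid so it lies in (0, 1) and can gate features."""
    lfc_n = (lfc - lfc.mean()) / (lfc.std() + eps)
    hfc_n = (hfc - hfc.mean()) / (hfc.std() + eps)
    return 1.0 / (1.0 + np.exp(-(lfc_n * hfc_n)))

rng = np.random.default_rng(0)
us_image = rng.random((64, 64))        # stand-in for a B-mode US image
lfc, hfc = haar_dwt2(us_image)         # each 32 x 32
amap = augmentation_map(lfc, hfc)
features = rng.random((32, 32))        # stand-in for a DL feature map
augmented = features * (1.0 + amap)    # features scaled by the augmentation map
```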
Affiliation(s)
- Ruixin Wang
- College of Computer Science and Software Engineering, Hohai University, Nanjing 210098, People's Republic of China
- Xiaohui Liu
- The First People's Hospital of Kunshan, Affiliated Kunshan Hospital of Jiangsu University, Kunshan 215300, People's Republic of China
- Guoping Tan
- College of Computer Science and Software Engineering, Hohai University, Nanjing 210098, People's Republic of China
13
Lokaj B, Pugliese MT, Kinkel K, Lovis C, Schmid J. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34:2096-2109. [PMID: 37658895] [PMCID: PMC10873444] [DOI: 10.1007/s00330-023-10181-6]
Abstract
OBJECTIVE Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. METHOD A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and ArXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in breast clinical imaging. We excluded research focusing only on performance, or with data not acquired in a clinical radiology setup and not involving real patients. RESULTS A total of 107 articles were included. We identified six major barriers related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). CONCLUSION This scoping review highlighted the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders to yield improvements in healthcare. CLINICAL RELEVANCE STATEMENT The identification of barriers and facilitators, with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. KEY POINTS
• Six major identified barriers were related to data; black-box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education.
• Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education.
• Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Affiliation(s)
- Belinda Lokaj
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Marie-Thérèse Pugliese
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Karen Kinkel
- Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
- Christian Lovis
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Jérôme Schmid
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
14
Xu T, Zhang XY, Yang N, Jiang F, Chen GQ, Pan XF, Peng YX, Cui XW. A narrative review on the application of artificial intelligence in renal ultrasound. Front Oncol 2024; 13:1252630. [PMID: 38495082] [PMCID: PMC10943690] [DOI: 10.3389/fonc.2023.1252630]
Abstract
Kidney disease is a serious public health problem, and various kidney diseases can progress to end-stage renal disease. The many complications of end-stage renal disease have a significant impact on the physical and mental health of patients. Ultrasound can be the test of choice for evaluating the kidney and perirenal tissue, as it is real-time, widely available, and radiation-free. To overcome substantial interobserver variability in renal ultrasound interpretation, artificial intelligence (AI) has the potential to be a new method to help radiologists make clinical decisions. This review introduces the applications of AI in renal ultrasound, including automatic segmentation of the kidney, measurement of renal volume, prediction of kidney function, and diagnosis of kidney diseases. The advantages and disadvantages of these applications are also presented to help clinicians conduct research. Additionally, the challenges and future perspectives of AI are discussed.
Affiliation(s)
- Tong Xu
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xian-Ya Zhang
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Na Yang
- Department of Ultrasound, Affiliated Hospital of Jilin Medical College, Jilin, China
- Fan Jiang
- Department of Medical Ultrasound, The Second Hospital of Anhui Medical University, Hefei, China
- Gong-Quan Chen
- Department of Medical Ultrasound, Minda Hospital of Hubei Minzu University, Enshi, China
- Xiao-Fang Pan
- Health Medical Department, Dalian Municipal Central Hospital, Dalian, China
- Yue-Xiang Peng
- Department of Ultrasound, Wuhan Third Hospital, Tongren Hospital of Wuhan University, Wuhan, China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
15
Grignaffini F, Barbuto F, Troiano M, Piazzo L, Simeoni P, Mangini F, De Stefanis C, Onetti Muda A, Frezza F, Alisi A. The Use of Artificial Intelligence in the Liver Histopathology Field: A Systematic Review. Diagnostics (Basel) 2024; 14:388. [PMID: 38396427] [PMCID: PMC10887838] [DOI: 10.3390/diagnostics14040388]
Abstract
Digital pathology (DP) has begun to play a key role in the evaluation of liver specimens. Recent studies have shown that a workflow that combines DP and artificial intelligence (AI) applied to histopathology has potential value in supporting the diagnosis, treatment evaluation, and prognosis prediction of liver diseases. Here, we provide a systematic review of the use of this workflow in the field of hepatology. Based on the PRISMA 2020 criteria, a search of the PubMed, SCOPUS, and Embase electronic databases was conducted, applying inclusion/exclusion filters. The articles were evaluated by two independent reviewers, who extracted the specifications and objectives of each study, the AI tools used, and the results obtained. From the 266 initial records identified, 25 eligible studies were selected, mainly conducted on human liver tissues. Most of the studies were performed using whole-slide imaging systems for image acquisition and applying different machine learning and deep learning methods for image pre-processing, segmentation, feature extraction, and classification. Of note, most of the studies selected demonstrated good performance as classifiers of liver histological images compared to pathologist annotations. Promising results to date bode well for the not-too-distant inclusion of these techniques in clinical practice.
Affiliation(s)
- Flavia Grignaffini
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Francesco Barbuto
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Maurizio Troiano
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
- Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Patrizio Simeoni
- National Transport Authority (NTA), D02 WT20 Dublin, Ireland
- Faculty of Lifelong Learning, South East Technological University (SETU), R93 V960 Carlow, Ireland
- Fabio Mangini
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Cristiano De Stefanis
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
- Fabrizio Frezza
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy
- Anna Alisi
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
16
Wang M, Liu Z, Ma L. Application of artificial intelligence in ultrasound imaging for predicting lymph node metastasis in breast cancer: A meta-analysis. Clin Imaging 2024; 106:110048. [PMID: 38065024] [DOI: 10.1016/j.clinimag.2023.110048]
Abstract
BACKGROUND This study aims to comprehensively evaluate the accuracy and effectiveness of ultrasound imaging based on artificial intelligence algorithms in predicting lymph node metastasis in breast cancer patients through a meta-analysis. METHODS We systematically searched PubMed, Embase, and the Cochrane Library for literature published up to May 2023. The search terms included artificial intelligence, ultrasound, breast cancer, and lymph node. Studies meeting the inclusion criteria were selected, and data were extracted for analysis. The main evaluation indicators included sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and area under the curve (AUC). Heterogeneity was assessed using the Cochran Q test combined with the I^2 statistic, which expresses the percentage of total variation in effect estimates attributable to between-study variation, as recommended by the Cochrane Handbook. A threshold p-value of 0.10 was used to compensate for the low power of the Q test. Sensitivity analysis was performed to assess the stability of individual studies, and publication bias was assessed with funnel plots. Additionally, Fagan plots were used to assess clinical utility. RESULTS Ten studies involving 4726 breast cancer patients were included in the meta-analysis. The results showed that ultrasound imaging based on artificial intelligence algorithms had high accuracy and effectiveness in predicting lymph node metastasis in breast cancer patients. The pooled sensitivity was 0.88 (95% CI: 0.81-0.93; P < 0.001; I^2 = 84.68), specificity was 0.75 (95% CI: 0.66-0.83; P < 0.001; I^2 = 91.11), and AUC was 0.89 (95% CI: 0.86-0.91). The positive likelihood ratio was 3.5 (95% CI: 2.6-4.8), the negative likelihood ratio was 0.16 (95% CI: 0.10-0.26), and the diagnostic odds ratio was 23 (95% CI: 13-40).
By comparison, for ultrasound imaging based on non-AI methods, the pooled sensitivity was 0.78 (95% CI: 0.63-0.88), specificity was 0.76 (95% CI: 0.63-0.86), and AUC was 0.84 (95% CI: 0.80-0.87). The positive likelihood ratio was 3.3 (95% CI: 1.9-5.6), the negative likelihood ratio was 0.29 (95% CI: 0.15-0.54), and the diagnostic odds ratio was 11 (95% CI: 4-33). Due to the limited sample size (n = 2), meta-analysis was not conducted for the outcome of predicting lymph node metastasis burden. CONCLUSION Ultrasound imaging based on artificial intelligence algorithms holds promise in predicting lymph node metastasis in breast cancer patients, demonstrating high accuracy and effectiveness. The application of this technology helps in diagnosis and treatment decisions for breast cancer patients and is expected to become an important tool in future clinical practice.
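For readers unfamiliar with these summary measures, the likelihood ratios and diagnostic odds ratio follow directly from pooled sensitivity and specificity, and Higgins I^2 follows from the Q statistic. The sketch below back-calculates from the rounded point estimates; the paper's pooled DOR of 23 comes from the full bivariate model, so the back-calculated value is slightly lower.

```python
def likelihood_ratios(sens: float, spec: float):
    """Derive LR+, LR-, and the diagnostic odds ratio from sensitivity/specificity."""
    plr = sens / (1.0 - spec)        # positive likelihood ratio
    nlr = (1.0 - sens) / spec        # negative likelihood ratio
    return plr, nlr, plr / nlr       # DOR = LR+ / LR-

def i_squared(q_stat: float, df: int) -> float:
    """Higgins I^2: percentage of total variation due to between-study heterogeneity."""
    return max(0.0, 100.0 * (q_stat - df) / q_stat)

# Pooled AI-based estimates reported above: sensitivity 0.88, specificity 0.75
plr, nlr, dor = likelihood_ratios(0.88, 0.75)
print(round(plr, 1), round(nlr, 2), round(dor, 1))  # 3.5 0.16 22.0
```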
Affiliation(s)
- Minghui Wang
- Department of Breast Surgery, Affiliate Hospital of Chengde Medical University, Hebei 067000, China
- Zihui Liu
- Department of Pathology, Affiliate Hospital of Chengde Medical University, Hebei 067000, China
- Lihui Ma
- Department of Breast Surgery, Affiliate Hospital of Chengde Medical University, Hebei 067000, China
17
Harutyunyan R, Jeffries SD, Morse J, Hemmerling TM. Beyond the Echo: The Evolution and Revolution of Ultrasound in Anesthesia. Anesth Analg 2024; 138:369-375. [PMID: 38215715] [DOI: 10.1213/ane.0000000000006834]
Abstract
This article explores the evolving role of ultrasound technology in anesthesia. Ultrasound emerged decades ago, offering clinicians noninvasive, economical, radiation-free, and real-time imaging capabilities. It might seem that such an established technology with apparent limitations has had its day, but this review discusses the current applications of ultrasound (in nerve blocks, vascular access, and airway management) and then, more speculatively, shows how integration of advanced ultrasound modalities such as contrast-enhanced imaging with virtual reality (VR) or nanotechnology can alter perioperative patient care. This article also explores the potential of robotics and artificial intelligence (AI) in augmenting ultrasound-guided anesthetic procedures and their implications for medical practice and education.
Affiliation(s)
- Robert Harutyunyan
- Department of Experimental Surgery, McGill University Health Center, Montreal, Quebec, Canada
- Sean D Jeffries
- Department of Experimental Surgery, McGill University Health Center, Montreal, Quebec, Canada
- Joshua Morse
- Department of Experimental Surgery, McGill University Health Center, Montreal, Quebec, Canada
- Thomas M Hemmerling
- Department of Experimental Surgery, McGill University Health Center, Montreal, Quebec, Canada
- Department of Anesthesia, McGill University, Montreal, Quebec, Canada
18
Coffey K, Aukland B, Amir T, Sevilimedu V, Saphier NB, Mango VL. Artificial Intelligence Decision Support for Triple-Negative Breast Cancers on Ultrasound. J Breast Imaging 2024; 6:33-44. [PMID: 38243859] [DOI: 10.1093/jbi/wbad080]
Abstract
OBJECTIVE To assess the performance of artificial intelligence (AI) decision support software in assessing and recommending biopsy of triple-negative breast cancers (TNBCs) on US. METHODS A retrospective institutional review board-approved review identified patients diagnosed with TNBC after US-guided biopsy between 2009 and 2019. AI output for TNBCs on diagnostic US included lesion features (shape, orientation) and a likelihood-of-malignancy category (benign, probably benign, suspicious, or probably malignant). An AI true positive was defined as suspicious or probably malignant and an AI false negative (FN) as benign or probably benign. AI and radiologist lesion feature agreement, AI and radiologist sensitivity and FN rate (FNR), and features associated with AI FNs were determined using the Wilcoxon rank-sum test, Fisher's exact test, the chi-square test of independence, and kappa statistics. RESULTS The study included 332 patients with 345 TNBCs. AI and radiologists demonstrated moderate agreement for lesion shape and orientation (k = 0.48 and k = 0.47, each P < .001). On the set of examinations using earlier diagnostic US, radiologists recommended biopsy of 339/345 lesions (sensitivity 98.3%, FNR 1.7%), and AI recommended biopsy of 333/345 lesions (sensitivity 96.5%, FNR 3.5%), including 6/6 radiologist FNs. On the set of examinations using immediate prebiopsy diagnostic US, AI recommended biopsy of 331/345 lesions (sensitivity 95.9%, FNR 4.1%). AI FNs were more frequently oval (q < 0.001), parallel (q < 0.001), circumscribed (q = 0.04), and complex cystic and solid (q = 0.006). CONCLUSION AI accurately recommended biopsies for 96% to 97% of TNBCs on US and may assist radiologists in classifying these lesions, which often demonstrate benign sonographic features.
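The kappa statistic used above to quantify AI-radiologist agreement on lesion features can be computed from paired categorical labels. A minimal sketch with illustrative labels (not study data):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    assert len(a) == len(b) and a
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in ca) / (n * n)    # chance-expected agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Toy shape labels from an AI system and a radiologist
ai  = ["oval", "oval", "irregular", "irregular", "oval", "irregular"]
rad = ["oval", "irregular", "irregular", "irregular", "oval", "oval"]
print(round(cohens_kappa(ai, rad), 2))  # 0.33
```

Kappa corrects raw percent agreement (4/6 here) for the agreement expected by chance, which is why it is preferred for feature-agreement comparisons like the one reported above.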
Affiliation(s)
- Kristen Coffey
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Brianna Aukland
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Tali Amir
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Varadan Sevilimedu
- Department of Biostatistics and Epidemiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Nicole B Saphier
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Victoria L Mango
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
19
Liu X, Qin X, Luo Q, Qiao J, Xiao W, Zhu Q, Liu J, Zhang C. A Transvaginal Ultrasound-Based Deep Learning Model for the Noninvasive Diagnosis of Myometrial Invasion in Patients with Endometrial Cancer: Comparison with Radiologists. Acad Radiol 2024:S1076-6332(23)00713-4. [PMID: 38182443] [DOI: 10.1016/j.acra.2023.12.035]
Abstract
RATIONALE AND OBJECTIVES This study aimed to determine the feasibility of using a deep learning (DL) method to determine the degree of myometrial invasion (MI) (i.e., whether MI >50%) in patients with endometrial cancer (EC) based on ultrasound (US) images. MATERIALS AND METHODS From September 2017 to April 2023, 1289 US images of 604 patients with EC who underwent surgical resection at center 1, center 2, or center 3 were obtained and divided into a training set and an internal validation set. Ninety-five patients from center 4 and center 5 were randomly selected as the external testing set according to the same criteria as those for the primary cohort. This study evaluated three DL models trained on the training set and tested them on the validation and testing sets. The models' performance was analyzed based on accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), and the performance of the models was subsequently compared with that of 15 radiologists. RESULTS In the final clinical diagnosis of MI in patients with EC, EfficientNet-B6 showed the best performance in the testing set, with an AUC of 0.814 (95% CI: 0.746-0.882), accuracy of 0.802 (95% CI: 0.733-0.855), sensitivity of 0.623, specificity of 0.879, positive likelihood ratio (PLR) of 6.750, and negative likelihood ratio (NLR) of 0.389. The diagnostic efficacy of EfficientNet-B6 was significantly better than that of the 15 radiologists, whose average diagnostic accuracy was 0.681 and average AUC was 0.678; the best-performing radiologist achieved an AUC of 0.739, accuracy of 0.716, sensitivity of 0.806, specificity of 0.672, PLR of 2.457, and NLR of 0.289. CONCLUSION Based on preoperative US images of patients with EC, the DL model can accurately determine the degree of endometrial MI; its performance is significantly better than that of radiologists, and it can effectively assist in clinical treatment decisions.
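AUC, the headline metric above, has a simple probabilistic reading: the chance that a randomly chosen positive case (here, MI >50%) receives a higher model score than a randomly chosen negative case, with ties counted as half. A minimal sketch with toy scores (not study data):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability P(score_pos > score_neg), ties half-counted."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Toy model scores for deep (positive) vs superficial (negative) invasion cases
print(auc_from_scores([0.9, 0.8, 0.4], [0.3, 0.7, 0.2]))  # 8 of 9 pairs ordered correctly
```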
Affiliation(s)
- Xiaoling Liu
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Rd, Shushan District, Hefei, 230022, Anhui, China; Department of Ultrasound, Nanchong Central Hospital, The Second Clinical Medical College, North Sichuan Medical College (University), Nanchong, Sichuan, China
- Xiachuan Qin
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Rd, Shushan District, Hefei, 230022, Anhui, China; Department of Ultrasound, Nanchong Central Hospital, The Second Clinical Medical College, North Sichuan Medical College (University), Nanchong, Sichuan, China
- Qi Luo
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Rd, Shushan District, Hefei, 230022, Anhui, China
- Jing Qiao
- Department of Ultrasound, Affiliated Hospital of North Sichuan Medical College, Nanchong, Sichuan, China
- Weihan Xiao
- North Sichuan Medical College, Nanchong, China
- Qiwei Zhu
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Rd, Shushan District, Hefei, 230022, Anhui, China
- Jian Liu
- Department of Ultrasound, The First Affiliated Hospital of Chengdu Medical College, Chengdu, China
- Chaoxue Zhang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, No. 218 Jixi Rd, Shushan District, Hefei, 230022, Anhui, China
20
Bellomo TR, Goudot G, Lella SK, Landau E, Sumetsky N, Zacharias N, Fischetti C, Dua A. Feasibility of Encord Artificial Intelligence Annotation of Arterial Duplex Ultrasound Images. Diagnostics (Basel) 2023; 14:46. [PMID: 38201355] [PMCID: PMC10795888] [DOI: 10.3390/diagnostics14010046]
Abstract
Duplex ultrasound (DUS) measurements for popliteal artery aneurysms (PAAs) can be time-consuming, error-prone, and operator-dependent. To eliminate this subjectivity and provide efficient segmentation, we applied artificial intelligence (AI) to accurately delineate the inner and outer lumen on DUS. DUS images were selected from a cohort of patients with PAAs from a multi-institutional platform. Encord is an easy-to-use, readily available online AI platform that was used to segment both the inner lumen and outer lumen of the PAA on DUS images. A model trained on 20 images and tested on 80 images had a mean Average Precision of 0.85 for the outer polygon and 0.23 for the inner polygon. The outer polygon had a higher recall score than precision score, at 0.90 and 0.85, respectively. The inner polygon had a score of 0.25 for both precision and recall. The outer polygon false-negative rate was lowest in images with the least amount of blur. This study demonstrates the feasibility of using the widely available Encord AI platform to identify standard features of PAAs that are critical for operative decision making.
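Segmentation precision and recall of this kind are typically computed by matching predicted polygons to ground truth at an intersection-over-union (IoU) threshold. A minimal sketch with boolean masks (illustrative data; Encord's internal evaluation may define matching differently):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two boolean segmentation masks."""
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred, gt).sum() / union

def precision_recall(match_ious, n_pred, n_gt, thr=0.5):
    """Count a prediction as a true positive when its best-match IoU clears the threshold."""
    tp = sum(i >= thr for i in match_ious)
    return tp / n_pred, tp / n_gt    # precision, recall

gt = np.zeros((20, 20), bool);   gt[5:15, 5:15] = True    # ground-truth lumen mask
pred = np.zeros((20, 20), bool); pred[6:16, 6:16] = True  # slightly shifted prediction
iou = mask_iou(pred, gt)                                  # 81 / 119, about 0.68
p, r = precision_recall([iou], n_pred=1, n_gt=1)
print(round(iou, 2), p, r)
```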
Affiliation(s)
- Tiffany R. Bellomo
- Division of Vascular and Endovascular Surgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Massachusetts General Hospital, Boston, MA 02114, USA
- Guillaume Goudot
- Division of Vascular and Endovascular Surgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Srihari K. Lella
- Division of Vascular and Endovascular Surgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Massachusetts General Hospital, Boston, MA 02114, USA
- Eric Landau
- Encord, Cord Technologies Inc., New York City, NY 10013, USA
- Natalie Sumetsky
- Division of Vascular and Endovascular Surgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Nikolaos Zacharias
- Division of Vascular and Endovascular Surgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Massachusetts General Hospital, Boston, MA 02114, USA
- Chanel Fischetti
- Harvard Medical School, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Emergency Medicine, Brigham and Women’s Hospital, Boston, MA 02115, USA
- Anahita Dua
- Division of Vascular and Endovascular Surgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Massachusetts General Hospital, Boston, MA 02114, USA
21
Li JW, Sheng DL, Chen JG, You C, Liu S, Xu HX, Chang C. Artificial intelligence in breast imaging: potentials and challenges. Phys Med Biol 2023; 68:23TR01. [PMID: 37722385] [DOI: 10.1088/1361-6560/acfade]
Abstract
Breast cancer, which is the most common type of malignant tumor among humans, is a leading cause of death in females. Standard treatment strategies, including neoadjuvant chemotherapy, surgery, postoperative chemotherapy, targeted therapy, endocrine therapy, and radiotherapy, are tailored for individual patients. Such personalized therapies have tremendously reduced the threat of breast cancer in females. Furthermore, early imaging screening plays an important role in reducing the treatment cycle and improving breast cancer prognosis. The recent innovative revolution in artificial intelligence (AI) has aided radiologists in the early and accurate diagnosis of breast cancer. In this review, we introduce the necessity of incorporating AI into breast imaging and the applications of AI in mammography, ultrasonography, magnetic resonance imaging, and positron emission tomography/computed tomography based on published articles since 1994. Moreover, the challenges of AI in breast imaging are discussed.
Affiliation(s)
- Jia-Wei Li
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Dan-Li Sheng
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Jian-Gang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, People's Republic of China
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Shuai Liu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Hui-Xiong Xu
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, 200032, People's Republic of China
- Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
22
Fujima N, Nakagawa J, Kameda H, Ikebe Y, Harada T, Shimizu Y, Tsushima N, Kano S, Homma A, Kwon J, Yoneyama M, Kudo K. Improvement of image quality in diffusion-weighted imaging with model-based deep learning reconstruction for evaluations of the head and neck. MAGMA 2023. [PMID: 37989922] [DOI: 10.1007/s10334-023-01129-4]
Abstract
OBJECTIVES To investigate the utility of deep learning (DL)-based image reconstruction using a model-based approach in head and neck diffusion-weighted imaging (DWI). MATERIALS AND METHODS We retrospectively analyzed 41 patients who underwent head/neck DWI; in 25 of these patients, the DWI demonstrated an untreated lesion. We performed qualitative and quantitative assessments of the DWI with both DL-based and conventional parallel imaging (PI)-based reconstructions. For the qualitative assessment, we visually evaluated overall image quality, soft-tissue conspicuity, degree of artifacts, and lesion conspicuity on a five-point scale. For the quantitative assessment, we measured the signal-to-noise ratio (SNR) of the bilateral parotid glands, the submandibular gland, the posterior muscle, and the lesion, and calculated the contrast-to-noise ratio (CNR) between the lesion and the adjacent muscle. RESULTS In the qualitative analysis, significant differences between the PI-based and DL-based reconstructions were observed for all evaluation items (p < 0.001). In the quantitative analysis, significant differences in SNR and CNR between the two reconstructions were likewise observed for all evaluation items (p = 0.002 to p < 0.001). DISCUSSION DL-based image reconstruction with the model-based technique effectively provided sufficient image quality in head/neck DWI.
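The SNR and CNR used in the quantitative assessment above follow standard region-of-interest (ROI) definitions: mean ROI signal divided by a noise estimate, and the absolute difference between lesion and adjacent-muscle signal divided by the same noise estimate. A minimal sketch with synthetic ROI intensities (the authors' exact ROI placement and noise-estimation method are not specified here, so the values are purely illustrative):

```python
import numpy as np

def snr(roi_signal, noise_sd):
    """Signal-to-noise ratio: mean ROI signal over the noise standard deviation."""
    return float(np.mean(roi_signal) / noise_sd)

def cnr(lesion_roi, muscle_roi, noise_sd):
    """Contrast-to-noise ratio between lesion and adjacent muscle ROIs."""
    return float(abs(np.mean(lesion_roi) - np.mean(muscle_roi)) / noise_sd)

# Synthetic ROI intensity samples (hypothetical values, not study data)
rng = np.random.default_rng(0)
lesion = rng.normal(200.0, 5.0, size=500)   # bright lesion ROI
muscle = rng.normal(120.0, 5.0, size=500)   # darker adjacent muscle ROI
noise_sd = 5.0                              # background noise estimate

snr_lesion = snr(lesion, noise_sd)
cnr_lesion_muscle = cnr(lesion, muscle, noise_sd)
```

With these toy numbers the lesion SNR is about 40 and the lesion-muscle CNR about 16; the point is only the shape of the computation, not the magnitudes.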
Affiliation(s)
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, 060-8638, Japan
- Junichi Nakagawa
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, 060-8638, Japan
- Hiroyuki Kameda
- Faculty of Dental Medicine, Department of Radiology, Hokkaido University, N13 W7, Kita-Ku, Sapporo, Hokkaido, 060-8586, Japan
- Yohei Ikebe
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Center for Cause of Death Investigation, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Taisuke Harada
- Center for Cause of Death Investigation, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Yukie Shimizu
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, 060-8638, Japan
- Nayuta Tsushima
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, 060-8638, Japan
- Satoshi Kano
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, 060-8638, Japan
- Akihiro Homma
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, 060-8638, Japan
- Jihun Kwon
- Philips Japan, 3-37 Kohnan 2-Chome, Minato-Ku, Tokyo, 108-8507, Japan
- Masami Yoneyama
- Philips Japan, 3-37 Kohnan 2-Chome, Minato-Ku, Tokyo, 108-8507, Japan
- Kohsuke Kudo
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Medical AI Research and Development Center, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
23
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is cost-effective and easily accessible but time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of the recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. A systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme, and full-text articles were assigned to the OB/GYN subspecialties and their research topics. The review includes 189 articles published from 1994 to 2023: 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. In conclusion, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker
- Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
24
Qin X, Xia L, Ma Q, Cheng D, Zhang C. Development of a novel combined nomogram model integrating deep learning radiomics to diagnose IgA nephropathy clinically. Ren Fail 2023; 45:2271104. [PMID: 37860932] [PMCID: PMC10591537] [DOI: 10.1080/0886022x.2023.2271104]
Abstract
This study aimed to develop and validate a combined nomogram model based on superb microvascular imaging (SMI)-based deep learning (DL), radiomics characteristics, and clinical factors for noninvasive differentiation between immunoglobulin A nephropathy (IgAN) and non-IgAN. We prospectively enrolled patients with chronic kidney disease who underwent renal biopsy from May 2022 to December 2022 and performed ultrasound and SMI the day before biopsy. The selected patients were randomly divided into training and testing cohorts in a 7:3 ratio. We extracted DL and radiomics features from the two-dimensional ultrasound and SMI images. A combined nomogram model was developed by combining the predictive probability of DL with clinical factors using multivariate logistic regression analysis. The proposed model's utility was evaluated using receiver operating characteristic (ROC) curves, calibration, and decision curve analysis. In total, 120 patients with primary glomerular disease were included: 84 in the training and 36 in the testing cohorts. In the testing cohort, the area under the ROC curve (AUC) of the radiomics model was 0.816 (95% CI: 0.663-0.968), and the AUC of the DL model was 0.844 (95% CI: 0.717-0.971). The nomogram model combined with independent clinical risk factors (IgA and hematuria) showed strong discrimination, with an AUC of 0.884 (95% CI: 0.773-0.996) in the testing cohort. Decision curve analysis verified the clinical practicability of the combined nomogram. The combined nomogram model based on SMI can accurately and noninvasively distinguish IgAN from non-IgAN and help physicians make clearer treatment plans.
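The combination step described in this abstract, feeding the DL predicted probability together with clinical factors (serum IgA and hematuria) into a multivariate logistic regression and scoring the result by AUC, can be sketched as follows. All data, coefficients, and variable names below are synthetic and illustrative; this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic cohort: 1 = IgAN, 0 = non-IgAN (made-up labels and features)
y = (rng.random(n) < 0.5).astype(float)
dl_prob = np.clip(0.5 + 0.35 * (2 * y - 1) + rng.normal(0, 0.15, n), 0.01, 0.99)
iga = 2.5 + 1.0 * y + rng.normal(0, 0.8, n)          # serum IgA level
hematuria = (rng.random(n) < 0.3 + 0.4 * y).astype(float)

# Design matrix: intercept + DL probability + clinical factors
X = np.column_stack([np.ones(n), dl_prob, iga, hematuria])

# Fit the multivariate logistic regression by plain gradient descent
w = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n

p = 1.0 / (1.0 + np.exp(-X @ w))

# AUC = probability that a random positive scores above a random negative
pos, neg = p[y == 1], p[y == 0]
auc = float((pos[:, None] > neg[None, :]).mean())
```

Because the synthetic DL probability separates the classes well, the training AUC comes out high; in the actual study the combined model reached 0.884 on a held-out testing cohort, which this toy fit does not reproduce.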
Affiliation(s)
- Xiachuan Qin
- Department of Ultrasound, Nanchong Central Hospital, The Second Clinical Medical College, North Sichuan Medical College (University), Nanchong, Sichuan Province, China
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, China
- Linlin Xia
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, China
- Qianqing Ma
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, China
- Dongliang Cheng
- Hebin Intelligent Robots Co., Ltd., Hefei, Anhui Province, China
- Chaoxue Zhang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, China
25
Noel GPJC. Evaluating AI-powered text-to-image generators for anatomical illustration: A comparative study. Anat Sci Educ 2023. [PMID: 37694692] [DOI: 10.1002/ase.2336]
Abstract
Medical illustration, the creation of visual representations of anatomy, has long been an essential tool for medical professionals and educators. The integration of AI and medical illustration has the potential to revolutionize the field of anatomy education by providing highly accurate, customizable images. The authors evaluated three AI-powered text-to-image generators in producing anatomical illustrations of the human skull, heart, and brain. The generators were assessed for their accurate depiction of foramina, suture lines, coronary arteries, aortic and pulmonary trunk branching, gyri, sulci, and the relationship between the cerebellum and temporal lobes. None of the generators produced illustrations with comprehensive anatomical details. Foramina, such as the mental and supraorbital foramina, were frequently omitted, and suture lines were inaccurately represented. The illustrations of the heart failed to indicate proper coronary artery origins, and the branching of the aorta and pulmonary trunk was often incorrect. Brain illustrations lacked accurate depiction of gyri and sulci, and the relationship between the cerebellum and temporal lobes remained unclear. Although the AI generators tended toward esoteric imagery, they exhibited significant speed and cost advantages over human illustrators. However, improving their accuracy necessitates augmenting the training databases with anatomically correct images. The study emphasizes the ongoing role of human medical illustrators, especially in ensuring the provision of accurate and accessible illustrations.
Affiliation(s)
- Geoffroy P J C Noel
- Division of Anatomy, Department of Surgery, University of California, San Diego, La Jolla, California, USA
- Division of Anatomical Sciences, Department of Anatomy and Cell Biology, McGill University, Montreal, Québec, Canada
- Institute of Health Sciences Education, Faculty of Medicine, McGill University, Montreal, Québec, Canada
26
Stremmel C, Breitschwerdt R. Digital Transformation in the Diagnostics and Therapy of Cardiovascular Diseases: Comprehensive Literature Review. JMIR Cardio 2023; 7:e44983. [PMID: 37647103] [PMCID: PMC10500361] [DOI: 10.2196/44983]
Abstract
BACKGROUND The digital transformation of our health care system has accelerated markedly in the last few years due to political, medical, and technical innovation and reorganization. The cardiovascular field in particular has undergone significant change, opening broad new perspectives for optimized treatment strategies. OBJECTIVE After a short historical introduction, this comprehensive literature review aimed to provide a detailed overview of the scientific evidence regarding digitalization in the diagnostics and therapy of cardiovascular diseases (CVDs). METHODS We performed an extensive literature search of the PubMed database and included all related articles published as of March 2022. Of the 3021 studies identified, 1639 (54.25%) were selected for structured analysis and presentation (original articles: n=1273, 77.67%; reviews or comments: n=366, 22.33%). In addition to studies on CVDs in general, 829 studies could be assigned to a specific CVD with a diagnostic and therapeutic approach; for data presentation, these 829 publications were grouped into 6 categories of CVDs. RESULTS Evidence-based innovations in the cardiovascular field cover a wide medical spectrum, from the diagnosis of congenital heart diseases or arrhythmias, through optimized workflows in the emergency care of acute myocardial infarction, to telemedical care for patients with chronic diseases such as heart failure, coronary artery disease, or hypertension. The use of smartphones and wearables, as well as the integration of artificial intelligence, provides important tools for location-independent medical care and the prevention of adverse events. CONCLUSIONS Digital transformation has opened up multiple new perspectives in the cardiovascular field, with rapidly expanding scientific evidence.
Beyond important improvements in terms of patient care, these innovations are also capable of reducing costs for our health care system. In the next few years, digital transformation will continue to revolutionize the field of cardiovascular medicine and broaden our medical and scientific horizons.
27
Suter B, Anthis AHC, Zehnder A, Mergen V, Rosendorf J, Gerken LRH, Schlegel AA, Korcakova E, Liska V, Herrmann IK. Surgical Sealant with Integrated Shape-Morphing Dual Modality Ultrasound and Computed Tomography Sensors for Gastric Leak Detection. Adv Sci (Weinh) 2023; 10:e2301207. [PMID: 37276437] [PMCID: PMC10427398] [DOI: 10.1002/advs.202301207]
Abstract
Postoperative anastomotic leaks are the most feared complications after gastric surgery. For diagnosis, clinicians mostly rely on clinical symptoms such as fever and tachycardia, which often develop only once a surgical leak is already fully symptomatic. Here, a gastric-fluid-responsive, dual-modality, electronic-free leak sensor system integrable into surgical adhesive suture support materials is introduced. The leak sensors contain high-atomic-number carbonates embedded in a polyacrylamide matrix that, upon exposure to gastric fluid, convert into gaseous carbon dioxide (CO2). The CO2 bubbles remain entrapped in the hydrogel matrix, producing a distinctly increased echogenic contrast detectable by a low-cost, portable ultrasound transducer, while the dissolution of the carbonate species and the resulting diffusion of the cation produce a markedly reduced contrast in computed tomography imaging. The sensing elements can be patterned into a variety of characteristic shapes and combined with nonreactive tantalum oxide reference elements, allowing the design of shape-morphing sensing elements visible to the naked eye as well as artificial intelligence-assisted automated detection. In summary, shape-morphing dual-modality sensors are reported for the early and robust detection of postoperative complications at deep tissue sites, opening new routes for postoperative patient surveillance using existing hospital infrastructure.
Affiliation(s)
- Benjamin Suter
- Nanoparticle Systems Engineering Laboratory, Institute of Energy and Process Engineering (IEPE), Department of Mechanical and Process Engineering (D-MAVT), ETH Zurich, Sonneggstrasse 3, 8092 Zürich, Switzerland
- Particles-Biology Interactions, Department of Materials Meet Life, Swiss Federal Laboratories for Materials Science and Technology (Empa), Lerchenfeldstrasse 5, 9014 St. Gallen, Switzerland
- Alexandre H. C. Anthis
- Nanoparticle Systems Engineering Laboratory, Institute of Energy and Process Engineering (IEPE), Department of Mechanical and Process Engineering (D-MAVT), ETH Zurich, Sonneggstrasse 3, 8092 Zürich, Switzerland
- Particles-Biology Interactions, Department of Materials Meet Life, Swiss Federal Laboratories for Materials Science and Technology (Empa), Lerchenfeldstrasse 5, 9014 St. Gallen, Switzerland
- Anna-Katharina Zehnder
- Nanoparticle Systems Engineering Laboratory, Institute of Energy and Process Engineering (IEPE), Department of Mechanical and Process Engineering (D-MAVT), ETH Zurich, Sonneggstrasse 3, 8092 Zürich, Switzerland
- Victor Mergen
- Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Rämistrasse 100, 8091 Zürich, Switzerland
- Jachym Rosendorf
- Department of Surgery, Faculty of Medicine in Pilsen, Charles University, Alej Svobody 923/80, 32300 Pilsen, Czech Republic
- Biomedical Center, Faculty of Medicine in Pilsen, Charles University, Alej Svobody 1655/76, 32300 Pilsen, Czech Republic
- Lukas R. H. Gerken
- Nanoparticle Systems Engineering Laboratory, Institute of Energy and Process Engineering (IEPE), Department of Mechanical and Process Engineering (D-MAVT), ETH Zurich, Sonneggstrasse 3, 8092 Zürich, Switzerland
- Particles-Biology Interactions, Department of Materials Meet Life, Swiss Federal Laboratories for Materials Science and Technology (Empa), Lerchenfeldstrasse 5, 9014 St. Gallen, Switzerland
- Andrea A. Schlegel
- Department of Surgery and Transplantation, Swiss HPB Centre, University Hospital Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
- Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Centre of Preclinical Research, 20122 Milan, Italy
- Transplantation Center, Digestive Disease and Surgery Institute and Department of Immunity and Inflammation, Lerner Research Institute, Cleveland Clinic, 9620 Carnegie Ave, Cleveland, OH 44106, United States
- Eva Korcakova
- Biomedical Center, Faculty of Medicine in Pilsen, Charles University, Alej Svobody 1655/76, 32300 Pilsen, Czech Republic
- Department of Imaging Methods, Faculty of Medicine in Pilsen, Charles University, Alej Svobody 80, 30460 Pilsen, Czech Republic
- Vaclav Liska
- Department of Surgery, Faculty of Medicine in Pilsen, Charles University, Alej Svobody 923/80, 32300 Pilsen, Czech Republic
- Biomedical Center, Faculty of Medicine in Pilsen, Charles University, Alej Svobody 1655/76, 32300 Pilsen, Czech Republic
- Inge K. Herrmann
- Nanoparticle Systems Engineering Laboratory, Institute of Energy and Process Engineering (IEPE), Department of Mechanical and Process Engineering (D-MAVT), ETH Zurich, Sonneggstrasse 3, 8092 Zürich, Switzerland
- Particles-Biology Interactions, Department of Materials Meet Life, Swiss Federal Laboratories for Materials Science and Technology (Empa), Lerchenfeldstrasse 5, 9014 St. Gallen, Switzerland
28
Xin C, Li B, Wang D, Chen W, Yue S, Meng D, Qiao X, Zhang Y. Deep learning for the rapid automatic segmentation of forearm muscle boundaries from ultrasound datasets. Front Physiol 2023; 14:1166061. [PMID: 37520832] [PMCID: PMC10374344] [DOI: 10.3389/fphys.2023.1166061]
Abstract
Ultrasound (US) is widely used in the clinical diagnosis and treatment of musculoskeletal diseases. However, the low efficiency and non-uniformity of manual recognition hinder its application and popularization for this purpose. Herein, we developed an automatic muscle boundary segmentation tool for US image recognition and tested its accuracy and clinical applicability. Our dataset comprised 465 US images of the flexor digitorum superficialis (FDS) from 19 participants (10 men and 9 women, age 27.4 ± 6.3 years). We used the U-net model for US image segmentation. The U-net output often includes several disconnected regions, whereas anatomically the target muscle usually has only one connected region. Based on this principle, we designed an algorithm, written in C++, to eliminate the redundant connected regions of the output. The muscle boundary images generated by the tool were compared with those obtained by specialists and junior physicians to analyze accuracy and clinical applicability. The dataset was divided into five groups for experimentation; the average Dice coefficient, recall, and accuracy, as well as the intersection over union (IoU), of the prediction set in each group were all about 90%. Furthermore, we propose a new standard for judging segmentation results. Under this standard, 99% of the 150 images predicted by U-net were rated excellent, very close to the segmentation obtained by professional doctors. In this study, we developed an automatic muscle segmentation tool for US-guided muscle injections whose recognition of the muscle boundary was similar in accuracy to manual labeling by a specialist sonographer, providing a reliable auxiliary tool for clinicians to shorten the US learning cycle, reduce clinical workload, and improve injection safety.
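The post-processing principle described above, discarding all but one connected region because the target muscle is anatomically a single connected structure, is straightforward to sketch. The original algorithm was written in C++; the Python version below is an illustrative re-implementation that keeps the largest 4-connected region of a binary mask, not the authors' code:

```python
import numpy as np
from collections import deque

def keep_largest_region(mask):
    """Keep only the largest 4-connected foreground region of a binary mask."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    sizes, current = {}, 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                # Breadth-first flood fill of one connected region
                current += 1
                labels[i, j] = current
                q, count = deque([(i, j)]), 0
                while q:
                    a, b = q.popleft()
                    count += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if (0 <= x < mask.shape[0] and 0 <= y < mask.shape[1]
                                and mask[x, y] and labels[x, y] == 0):
                            labels[x, y] = current
                            q.append((x, y))
                sizes[current] = count
    if not sizes:
        return np.zeros_like(mask)
    best = max(sizes, key=sizes.get)
    return labels == best

# Toy U-net-style output: one large muscle region plus a spurious island
pred = np.zeros((8, 8), dtype=bool)
pred[1:5, 1:5] = True    # main region (16 pixels)
pred[6:8, 6:8] = True    # spurious island (4 pixels)
cleaned = keep_largest_region(pred)
```

In practice one would use an optimized labeling routine (e.g. `scipy.ndimage.label`) rather than a Python loop; the BFS here only makes the idea explicit.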
Affiliation(s)
- Chen Xin
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
- Baoxu Li
- School of Mathematics, Shandong University, Jinan, China
- Dezheng Wang
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
- Wei Chen
- Department of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Shouwei Yue
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
- Dong Meng
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
- Xu Qiao
- Department of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Yang Zhang
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
29
Prieto-Fernández A, Sánchez-Barroso G, González-Domínguez J, García-Sanz-Calcedo J. Interaction between maintenance variables of medical ultrasound scanners through multifactor dimensionality reduction. Expert Rev Med Devices 2023; 20:851-864. [PMID: 37522639] [DOI: 10.1080/17434440.2023.2243208]
Abstract
BACKGROUND Proper maintenance of electro-medical devices is crucial for the quality of patient care and the economic performance of healthcare organizations. This research aims to identify the interactions between ultrasound scanner (US) maintenance variables as a function of three maintenance indicators: whether a US is in service or decommissioned, an excessive number of failures, and the failure rate. Knowing these interactions, specific maintenance measures can be developed to improve US reliability. RESEARCH DESIGN AND METHODS The multifactor dimensionality reduction (MDR) method was employed to analyze data from 222 US and their four-year maintenance history. Models were developed based on the variables with the greatest influence on the maintenance indicators, and US were classified according to the associated risk. RESULTS US with more than one major failure or at least one major component replacement had up to 496.4% more failures than the average. The failure rate increased by up to 188.7% over the average for US with more than three moderate failures, three replacements, or both. CONCLUSIONS This study identifies and quantifies the causes of risk in order to establish a specific maintenance plan for US, helping to better understand US degradation and to optimize their operation and maintenance.
Affiliation(s)
- Gonzalo Sánchez-Barroso
- Engineering Projects Area, School of Industrial Engineering, University of Extremadura, Badajoz, Spain
- Jaime González-Domínguez
- Engineering Projects Area, School of Industrial Engineering, University of Extremadura, Badajoz, Spain
- Justo García-Sanz-Calcedo
- Engineering Projects Area, School of Industrial Engineering, University of Extremadura, Badajoz, Spain
30
Zhang XY, Wei Q, Wu GG, Tang Q, Pan XF, Chen GQ, Zhang D, Dietrich CF, Cui XW. Artificial intelligence-based ultrasound elastography for disease evaluation: a narrative review. Front Oncol 2023; 13:1197447. [PMID: 37333814] [PMCID: PMC10272784] [DOI: 10.3389/fonc.2023.1197447]
Abstract
Ultrasound elastography (USE) provides information on tissue stiffness and elasticity complementary to conventional ultrasound imaging. It is noninvasive and free of radiation, and has become a valuable tool for improving diagnostic performance alongside conventional ultrasound imaging. However, diagnostic accuracy is reduced by high operator dependence and intra- and inter-observer variability in radiologists' visual observations. Artificial intelligence (AI) has great potential to perform automatic medical image analysis tasks and provide a more objective, accurate, and intelligent diagnosis. More recently, enhanced diagnostic performance of AI applied to USE has been demonstrated for various disease evaluations. This review provides an overview of the basic concepts of USE and AI techniques for clinical radiologists and then introduces the applications of AI in USE imaging, focusing on the following anatomical sites: the liver, breast, thyroid, and other organs, for lesion detection and segmentation, machine learning (ML)-assisted classification, and prognosis prediction. The existing challenges and future trends of AI in USE are also discussed.
Affiliation(s)
- Xian-Ya Zhang
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qi Wei
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ge-Ge Wu
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qi Tang
- Department of Ultrasonography, The First Hospital of Changsha, Changsha, China
- Xiao-Fang Pan
- Health Medical Department, Dalian Municipal Central Hospital, Dalian, China
- Gong-Quan Chen
- Department of Medical Ultrasound, Minda Hospital of Hubei Minzu University, Enshi, China
- Di Zhang
- Department of Medical Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
31
Chavez MR, Butler TS, Rekawek P, Heo H, Kinzler WL. Chat Generative Pre-trained Transformer: why we should embrace this technology. Am J Obstet Gynecol 2023; 228:706-711. [PMID: 36924908] [DOI: 10.1016/j.ajog.2023.03.010]
Abstract
With the advent of artificial intelligence that not only can learn from us but also can communicate with us in plain language, humans are embarking on a brave new future. The interaction between humans and artificial intelligence has never been so widespread. Chat Generative Pre-trained Transformer is an artificial intelligence resource that has potential uses in the practice of medicine. As clinicians, we have the opportunity to help guide and develop new ways to use this powerful tool. Optimal use of any tool requires a certain level of comfort. This is best achieved by appreciating its power and limitations. Being part of the process is crucial in maximizing its use in our field. This clinical opinion demonstrates the potential uses of Chat Generative Pre-trained Transformer for obstetrician-gynecologists and encourages readers to serve as the driving force behind this resource.
Affiliation(s)
- Martin R Chavez
- Division of Maternal-Fetal Medicine, Department of Obstetrics and Gynecology, New York University Langone Hospital-Long Island, New York University Long Island School of Medicine, Mineola, NY
- Thomas S Butler
- New York University Langone Reproductive Specialists of New York, New York University Langone Hospital-Long Island, New York University Langone Long Island School of Medicine, Mineola, New York
- Patricia Rekawek
- Division of Maternal-Fetal Medicine, Department of Obstetrics and Gynecology, New York University Langone Hospital-Long Island, New York University Long Island School of Medicine, Mineola, NY
- Hye Heo
- Division of Maternal-Fetal Medicine, Department of Obstetrics and Gynecology, New York University Langone Hospital-Long Island, New York University Long Island School of Medicine, Mineola, NY
- Wendy L Kinzler
- Division of Maternal-Fetal Medicine, Department of Obstetrics and Gynecology, New York University Langone Hospital-Long Island, New York University Long Island School of Medicine, Mineola, NY
32
Csore J, Karmonik C, Wilhoit K, Buckner L, Roy TL. Automatic Classification of Magnetic Resonance Histology of Peripheral Arterial Chronic Total Occlusions Using a Variational Autoencoder: A Feasibility Study. Diagnostics (Basel) 2023; 13:1925. [PMID: 37296778] [DOI: 10.3390/diagnostics13111925]
Abstract
The novel approach of our study consists of adapting and evaluating a custom-made variational autoencoder (VAE) using two-dimensional (2D) convolutional neural networks (CNNs) on magnetic resonance imaging (MRI) images to differentiate soft from hard plaque components in peripheral arterial disease (PAD). Five amputated lower extremities were imaged on a clinical ultra-high-field 7 Tesla MRI system. Ultrashort echo time (UTE), T1-weighted (T1w), and T2-weighted (T2w) datasets were acquired, and multiplanar reconstruction (MPR) images were obtained from one lesion per limb. Images were aligned to each other, and pseudo-color red-green-blue images were created. Four areas in latent space were defined corresponding to the sorted images reconstructed by the VAE. Images were classified by their position in latent space and scored with a tissue score (TS) as follows: (1) lumen patent, TS 0; (2) partially patent, TS 1; (3) mostly occluded with soft tissue, TS 3; (4) mostly occluded with hard tissue, TS 5. The average TS per lesion was defined as the sum of the tissue scores of each image divided by the total number of images, and the relative percentage of each class was calculated per lesion. In total, 2390 MPR reconstructed images were included in the analysis. The composition varied from fully patent (lesion #1) to the presence of all four classes. Lesions #2, #3, and #5 contained all tissue classes except mostly occluded with hard tissue, while lesion #4 contained all four (ranges (I): 0.2-100%, (II): 46.3-75.9%, (III): 18-33.5%, (IV): 20%). Training the VAE was successful, as images with soft and hard tissues in PAD lesions were satisfactorily separated in latent space. Using a VAE may assist in the rapid classification of MRI histology images acquired in a clinical setup, facilitating endovascular procedures.
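The per-lesion scoring described above, the average tissue score as the sum of per-image scores over the number of images, plus the relative percentage of each class, reduces to simple arithmetic. A sketch with made-up scores for a hypothetical lesion (the score values 0, 1, 3, 5 are the ones defined in the abstract; the image counts are invented):

```python
# Per-image tissue scores for one hypothetical lesion:
# 0 = lumen patent, 1 = partially patent,
# 3 = mostly occluded with soft tissue, 5 = mostly occluded with hard tissue
scores = [0, 0, 1, 3, 3, 5, 1, 0, 3, 5]

# Average tissue score: sum of per-image scores over the number of images
average_ts = sum(scores) / len(scores)

# Relative percentage of each class among the lesion's images
classes = (0, 1, 3, 5)
relative_pct = {c: 100.0 * scores.count(c) / len(scores) for c in classes}
```

For these toy scores the average TS is 2.1 and the class percentages sum to 100; a lesion scored entirely 0 would correspond to the "only patent" case of lesion #1.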
Affiliation(s)
- Judit Csore: DeBakey Heart and Vascular Center, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA; Heart and Vascular Center, Semmelweis University, 68 Városmajor Street, 1122 Budapest, Hungary
- Christof Karmonik, Kayla Wilhoit, Lily Buckner: MRI Core, Translational Imaging Center, Houston Methodist Research Institute, 6670 Bertner Avenue, Houston, TX 77030, USA
- Trisha L Roy: DeBakey Heart and Vascular Center, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA
33
Amir T, Coffey K, Sevilimedu V, Fardanesh R, Mango VL. A role for breast ultrasound Artificial Intelligence decision support in the evaluation of small invasive lobular carcinomas. Clin Imaging 2023; 101:77-85. [PMID: 37311398] [DOI: 10.1016/j.clinimag.2023.05.005]
Abstract
OBJECTIVE To evaluate the diagnostic performance of an Artificial Intelligence (AI) decision support (DS) system in the ultrasound (US) assessment of invasive lobular carcinoma (ILC) of the breast, a cancer that can demonstrate variable appearance and present insidiously. METHODS A retrospective review was performed of 75 patients with 83 ILCs diagnosed by core biopsy or surgery between November 2017 and November 2019. ILC characteristics (size, shape, echogenicity) were recorded, and the AI DS output (lesion characteristics, likelihood of malignancy) was compared to radiologist assessment. RESULTS The AI DS system interpreted 100% of ILCs as suspicious or probably malignant (100% sensitivity, 0% false-negative rate). 99% (82/83) of detected ILCs were initially recommended for biopsy by the interpreting breast radiologist, and 100% (83/83) were recommended for biopsy after one additional ILC was identified on same-day repeat diagnostic ultrasound. For lesions that the AI DS rated probably malignant but the radiologist assigned a BI-RADS 4 assessment, the median lesion size was 1 cm, compared with 1.4 cm for those given a BI-RADS 5 assessment (p = 0.006). These results suggest that AI may offer more useful DS for smaller, sub-centimeter lesions in which shape, margin status, or vascularity is more difficult to discern. Only 20% of patients with ILC were assigned a BI-RADS 5 assessment by the radiologist. CONCLUSION The AI DS accurately characterized 100% of detected ILC lesions as suspicious or probably malignant and may help increase radiologist confidence when assessing ILC on ultrasound.
Affiliation(s)
- Tali Amir, Kristen Coffey, Victoria L Mango: Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, USA
- Varadan Sevilimedu: Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, 485 Lexington Ave, 2nd Floor, New York, NY 10017, USA
- Reza Fardanesh: Department of Radiology, University of California Los Angeles, 1250 16th St, Suite 2340, Santa Monica, CA 90404, USA
34
Xiao S, Zhang J, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Application and Progress of Artificial Intelligence in Fetal Ultrasound. J Clin Med 2023; 12:3298. [PMID: 37176738] [PMCID: PMC10179567] [DOI: 10.3390/jcm12093298]
Abstract
Prenatal ultrasonography is the most crucial imaging modality during pregnancy. However, problems such as high fetal mobility, excessive maternal abdominal wall thickness, and inter-observer variability limit the development of traditional ultrasound in clinical applications. The combination of artificial intelligence (AI) and obstetric ultrasound may help optimize fetal ultrasound examination by shortening the examination time, reducing the physician's workload, and improving diagnostic accuracy. AI has been successfully applied to automatic fetal ultrasound standard plane detection, biometric parameter measurement, and disease diagnosis to facilitate conventional imaging approaches. In this review, we attempt to thoroughly review the applications and advantages of AI in prenatal fetal ultrasound and discuss the challenges and promises of this new field.
Affiliation(s)
- Sushan Xiao, Junmin Zhang, Ye Zhu, Zisang Zhang, Haiyan Cao, Mingxing Xie, Li Zhang: Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
35
Pang J, Xiu W, Ma X. Application of Artificial Intelligence in the Diagnosis, Treatment, and Prognostic Evaluation of Mediastinal Malignant Tumors. J Clin Med 2023; 12:2818. [PMID: 37109155] [PMCID: PMC10144939] [DOI: 10.3390/jcm12082818]
Abstract
Artificial intelligence (AI), also known as machine intelligence, is widely utilized in the medical field and is promoting medical advances. Malignant tumors are a critical focus of medical research and of efforts to improve clinical diagnosis and treatment. Mediastinal malignancies are attracting increasing attention because of the difficulty of their treatment. With the help of AI, challenges from drug discovery to survival improvement are steadily being overcome. This article reviews the progress of AI in the diagnosis, treatment, and prognostic evaluation of mediastinal malignant tumors based on current literature findings.
Affiliation(s)
- Jiyun Pang, Weigang Xiu: Division of Thoracic Tumor Multimodality Treatment, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China; State Key Laboratory of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China; West China School of Medicine, Sichuan University, Chengdu 610041, China
- Xuelei Ma: Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China
36
Román-Belmonte JM, De la Corte-Rodríguez H, Rodríguez-Damiani BA, Rodríguez-Merchán EC. Artificial Intelligence in Musculoskeletal Conditions. Artif Intell 2023. [DOI: 10.5772/intechopen.110696]
Abstract
Artificial intelligence (AI) refers to computer capabilities that resemble human intelligence. AI implies the ability to learn and perform tasks that have not been specifically programmed. Moreover, it is an iterative process involving the ability of computerized systems to capture information, transform it into knowledge, and process it to produce adaptive changes in the environment. A large labeled database is needed to train the AI system and generate a robust algorithm. Otherwise, the algorithm cannot be applied in a generalized way. AI can facilitate the interpretation and acquisition of radiological images. In addition, it can facilitate the detection of trauma injuries and assist in orthopedic and rehabilitative processes. The applications of AI in musculoskeletal conditions are promising and are likely to have a significant impact on the future management of these patients.
37
Tahmasebi A, Wang S, Wessner CE, Vu T, Liu JB, Forsberg F, Civan J, Guglielmo FF, Eisenbrey JR. Ultrasound-Based Machine Learning Approach for Detection of Nonalcoholic Fatty Liver Disease. J Ultrasound Med 2023. [PMID: 36807314] [DOI: 10.1002/jum.16194]
Abstract
OBJECTIVES Current diagnosis of nonalcoholic fatty liver disease (NAFLD) relies on biopsy or MR-based fat quantification. This prospective study explored the use of ultrasound with artificial intelligence for the detection of NAFLD. METHODS One hundred twenty subjects with clinical suspicion of NAFLD and 10 healthy volunteers consented to participate in this institutional review board-approved study. Subjects were categorized as NAFLD or non-NAFLD according to MR proton density fat fraction (PDFF) findings. Ultrasound images from 10 different locations in the right and left hepatic lobes were collected following a standard protocol. MRI-based liver fat quantification was used as the reference standard, with >6.4% indicative of NAFLD. A supervised machine learning model was developed for assessment of NAFLD. To validate model performance, a balanced testing dataset of 24 subjects was used. Sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy with 95% confidence intervals were calculated. RESULTS A total of 1119 images from 106 participants were used for model development. The internal evaluation achieved an average precision of 0.941, recall of 88.2%, and precision of 89.0%. In the testing set, the AutoML model achieved a sensitivity of 72.2% (63.1%-80.1%), specificity of 94.6% (88.7%-98.0%), positive predictive value (PPV) of 93.1% (86.0%-96.7%), negative predictive value of 77.3% (71.6%-82.1%), and accuracy of 83.4% (77.9%-88.0%). The average agreement for an individual subject was 92%. CONCLUSIONS An ultrasound-based machine learning model for identification of NAFLD showed high specificity and PPV in this prospective trial. This approach may in the future serve as an inexpensive and noninvasive screening tool for identifying NAFLD in high-risk patients.
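The sensitivity, specificity, PPV, NPV and accuracy reported here all derive from a 2x2 confusion matrix; a generic sketch with illustrative counts (not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts, chosen only to illustrate the arithmetic:
m = binary_metrics(tp=13, fp=1, tn=35, fn=5)
# sensitivity = 13/18 ≈ 0.722, specificity = 35/36 ≈ 0.972
```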
Affiliation(s)
- Aylin Tahmasebi, Shuo Wang, Corinne E Wessner, Trang Vu, Ji-Bin Liu, Flemming Forsberg, Flavius F Guglielmo, John R Eisenbrey: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Jesse Civan: Division of Gastroenterology and Hepatology, Department of Medicine, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
38
Albakr A, Ben-Israel D, Yang R, Kruger A, Alhothali W, Al Towim A, Lama S, Ajlan A, Riva-Cambrin J, Prada F, Al-Habib A, Sutherland GR. Ultrasound Elastography in Neurosurgery: Current Applications and Future Perspectives. World Neurosurg 2023; 170:195-205.e1. [PMID: 36336268] [DOI: 10.1016/j.wneu.2022.10.108]
Abstract
BACKGROUND Similar to clinical palpation, ultrasound elastography (USE) helps distinguish between tissues by providing information on their elasticity. While it has been widely explored and applied to many body organs, USE has not been studied as extensively for application in neurosurgery. This systematic review was performed to identify articles related to the use of intraoperative USE in neurosurgery. METHODS The search included the MEDLINE(R) database. Only original peer-reviewed full-text articles were included, with no language or publication-year restrictions. Two independent reviewers assessed the search results for relevance; the identified articles were screened by title, abstract, and full-text review. RESULTS Seventeen articles were included in the qualitative analysis: 13 related to oncology, 3 to epilepsy, and 1 to spine surgery. In oncology, USE was found useful in defining tumor stiffness, aiding surgical planning, detecting residual tumors, discriminating between tumor and brain tissue, and differentiating between different tumors. In epilepsy, USE could improve the detection of epileptogenic foci, thereby enhancing the prospect of complete and safe resection. The application in spinal surgery was limited to demonstrating that a compressed spinal cord is stiffer than a decompressed one. CONCLUSIONS USE was found to be a safe, quick, portable, and economical tool and a useful intraoperative adjunct providing information relevant to a variety of neurosurgical diseases at different stages of surgery. This review describes the current intraoperative neurosurgical applications of USE, the concept of elasticity, and the different USE modalities, as well as the technical challenges, limitations, and possible future implications.
Affiliation(s)
- Abdulrahman Albakr: Department of Clinical Neurosciences, University of Calgary, Calgary, Alberta, Canada; Division of Neurosurgery, Department of Surgery, King Saud University, Riyadh, Saudi Arabia; Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- David Ben-Israel, Runze Yang: Department of Clinical Neurosciences, University of Calgary, Calgary, Alberta, Canada
- Alexander Kruger: Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Wajda Alhothali, Abdullah Al Towim, Abdulrazag Ajlan, Amro Al-Habib: Division of Neurosurgery, Department of Surgery, King Saud University, Riyadh, Saudi Arabia
- Sanju Lama: Department of Clinical Neurosciences, University of Calgary, Calgary, Alberta, Canada; Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Jay Riva-Cambrin: Department of Clinical Neurosciences, University of Calgary, Calgary, Alberta, Canada; Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Alberta, Canada
- Francesco Prada: Department of Neurological Surgery, University of Virginia Health System, Charlottesville, Virginia, USA; Acoustic Neuroimaging and Therapy Laboratory, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy; Focused Ultrasound Foundation, Charlottesville, Virginia, USA
- Garnette R Sutherland: Department of Clinical Neurosciences, University of Calgary, Calgary, Alberta, Canada; Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
39
Yang F, Cai B, Gu H, Wang F. Comparative Investigation on the Structure and Properties of Protein Films from Domestic and Wild Silkworms through Ultrasonic Regeneration. J Mol Struct 2023. [DOI: 10.1016/j.molstruc.2023.135255]
40
Ni C, Feng B, Yao J, Zhou X, Shen J, Ou D, Peng C, Xu D. Value of deep learning models based on ultrasonic dynamic videos for distinguishing thyroid nodules. Front Oncol 2023; 12:1066508. [PMID: 36733368] [PMCID: PMC9887311] [DOI: 10.3389/fonc.2022.1066508]
Abstract
Objective This study was designed to distinguish benign and malignant thyroid nodules using deep learning (DL) models based on ultrasound dynamic videos. Methods Ultrasound dynamic videos of 1018 thyroid nodules were retrospectively collected from 657 patients in Zhejiang Cancer Hospital from January 2020 to December 2020 and used to test 5 DL models. Results In the internal test set, the area under the receiver operating characteristic curve (AUROC) was 0.929 (95% CI: 0.888-0.970) for the best-performing model, LSTM. Two radiologists interpreted the dynamic videos with AUROC values of 0.760 (95% CI: 0.653-0.867) and 0.815 (95% CI: 0.778-0.853). In the external test set, the best-performing DL model had an AUROC of 0.896 (95% CI: 0.847-0.945), and the two ultrasound radiologists had AUROC values of 0.754 (95% CI: 0.649-0.850) and 0.833 (95% CI: 0.797-0.869). Conclusion This study demonstrates that a DL model based on ultrasound dynamic videos performs better than ultrasound radiologists in distinguishing thyroid nodules.
Affiliation(s)
- Chen Ni: The Second Clinical School of Zhejiang Chinese Medical University, Hangzhou, China
- Bojian Feng: Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
- Jincao Yao: Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Xueqin Zhou: Clinical Research Department, Esaote (Shenzhen) Medical Equipment Co., Ltd., Xinyilingyu Research Center, Shenzhen, China
- Jiafei Shen, Di Ou, Chanjuan Peng: Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China; Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
- Dong Xu (corresponding author): Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China; Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, Zhejiang, China
41
Taye M, Morrow D, Cull J, Smith DH, Hagan M. Deep Learning for FAST Quality Assessment. J Ultrasound Med 2023; 42:71-79. [PMID: 35770928] [DOI: 10.1002/jum.16045]
Abstract
OBJECTIVES To determine the feasibility of using a deep learning (DL) algorithm to assess the quality of focused assessment with sonography in trauma (FAST) exams. METHODS Our dataset consists of 441 FAST exams, classified as good-quality or poor-quality, with 3161 videos. We first used convolutional neural networks (CNNs), pretrained on the ImageNet dataset and fine-tuned on the FAST dataset. Second, we trained a CNN autoencoder to compress FAST images with a 20:1 compression ratio; the compressed codes were input to a two-layer classifier network. To train the networks, each video was labeled with the quality of the exam, and the frames were labeled with the quality of the video. For inference, a video was classified as poor-quality if half its frames were classified as poor-quality by the network, and an exam was classified as poor-quality if half its videos were classified as poor-quality. RESULTS The results with the encoder-classifier networks were much better than the transfer-learning results with CNNs, primarily because the ImageNet dataset is not a good match for the ultrasound quality-assessment problem. The DL models produced video sensitivities and specificities of 99% and 98% on held-out test sets. CONCLUSIONS Using an autoencoder to compress FAST images is a very effective way to obtain features that can be used to predict exam quality; these features are more suitable than those obtained from CNNs pretrained on ImageNet.
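The frame-to-video-to-exam inference rule described in this abstract (poor-quality if at least half of the sub-units are poor) can be sketched as follows; a simplified reconstruction in which the per-frame CNN classifier is assumed and stubbed out as booleans:

```python
def classify_video(frame_is_poor):
    """A video is poor-quality if at least half its frames are classified poor.
    The abstract says 'if half the frames'; >= half is assumed here."""
    return sum(frame_is_poor) >= len(frame_is_poor) / 2

def classify_exam(videos):
    """An exam is poor-quality if at least half its videos are classified poor."""
    poor_videos = [classify_video(frames) for frames in videos]
    return sum(poor_videos) >= len(poor_videos) / 2

# Example: three videos; two are majority-poor, so the exam is flagged poor.
exam = [[True, True, False], [True, False, True], [False, False, False]]
# classify_exam(exam) -> True
```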
Affiliation(s)
- Mesfin Taye: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA; IBM Cloud, IBM, Armonk, New York, USA
- Dustin Morrow: Department of Emergency Medicine (Division Chief of Emergency Ultrasound), Prisma Health, University of South Carolina School of Medicine Greenville, Greenville, SC, USA
- John Cull: Prisma Health, University of South Carolina School of Medicine-Greenville, Greenville, SC, USA
- Dane Hudson Smith: Holcombe Department of Electrical Engineering, Watt Family Innovation Center, Clemson University, Clemson, SC, USA
- Martin Hagan: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA
42
Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861] [DOI: 10.1016/j.media.2022.102629]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing ultrasound (US) fetal images. A number of survey papers are available today, but most focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and commented on from both the methodology and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical-structure analysis and (iii) biometry-parameter estimation. For each category, the main limitations and open issues are presented. Summary tables are included to facilitate comparison among the different approaches, and emerging applications are also outlined. Publicly available datasets and the performance metrics commonly used to assess algorithm performance are summarized as well. The paper ends with a critical summary of the current state of the art of DL algorithms for fetal US image analysis and a discussion of the challenges researchers in the field must tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo: Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni: Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia: The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
43
Deep learning-based real time detection for cardiac objects with fetal ultrasound video. Informatics in Medicine Unlocked 2023. [DOI: 10.1016/j.imu.2022.101150]
44
Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. [PMID: 36553119] [PMCID: PMC9777253] [DOI: 10.3390/diagnostics12123111]
Abstract
Artificial intelligence (AI), a rapidly advancing technology disrupting a wide spectrum of applications, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning, honed with unlimited cross-data/case referencing, has found great utility encompassing four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Breast cancer has for years topped the cumulative cancer-risk ranking for women across six continents, existing in variegated forms and presenting a complicated context for medical decisions. Recognizing the ever-increasing demand for quality healthcare, contemporary AI has been envisioned to make great strides in clinical data management and perception, with the capability to detect findings of indeterminate significance, predict prognosis, and correlate available data into a meaningful clinical endpoint. Here, the authors captured the review works of the past decades focusing on AI in breast imaging and systematized the included works into one usable document, termed an umbrella review. The present study aims to provide a panoramic view of how AI is poised to enhance breast imaging procedures. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. This study synthesizes, collates, and correlates the included review works, thereby identifying the patterns, trends, quality, and types of the included works captured by the structured search strategy. It is intended to serve as a "one-stop center" synthesis and provide a holistic bird's-eye view to readers, from newcomers to existing researchers and relevant stakeholders, on the topic of interest.
45
Shao J, Zhou K, Cai YH, Geng DY. Application of an Improved U2-Net Model in Ultrasound Median Neural Image Segmentation. Ultrasound Med Biol 2022; 48:2512-2520. [PMID: 36167742] [DOI: 10.1016/j.ultrasmedbio.2022.08.003]
Abstract
To investigate whether an improved U2-Net model could be used to segment the median nerve and improve segmentation performance, we performed a retrospective study with 402 nerve images from patients who visited Huashan Hospital from October 2018 to July 2020; 249 images were from patients with carpal tunnel syndrome, and 153 were from healthy volunteers. Of these, 320 cases were selected as the training set and 82 as the test set. The improved U2-Net model was used to segment each image. Dice coefficient (Dice), pixel accuracy (PA), mean intersection over union (MIoU) and average Hausdorff distance (AVD) were used to evaluate segmentation performance. The Dice, MIoU, PA and AVD values of our improved U2-Net were 72.85%, 79.66%, 95.92% and 51.37 mm, respectively, close to the ground truth provided by clinicians' labeling. By comparison, the corresponding values were 43.19%, 65.57%, 86.22% and 74.82 mm for U-Net, and 58.65%, 72.53%, 88.98% and 57.30 mm for Res-U-Net. Overall, our data suggest the improved U2-Net model might be used for segmentation of median nerve ultrasound images.
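The Dice and IoU metrics quoted in this abstract have standard set-overlap definitions for binary masks; a minimal NumPy sketch (not the authors' code; the example masks are illustrative):

```python
import numpy as np

def dice(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """IoU = |A∩B| / |A∪B|; MIoU averages this over classes."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)  # predicted mask
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)  # ground-truth mask
# dice -> 2*2/(3+3) ≈ 0.667, iou -> 2/4 = 0.5
```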
Affiliation(s)
- Jie Shao
- Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, China
| | - Kun Zhou
- Academy for Engineering and Technology, Fudan University, Shanghai, China
| | - Ye-Hua Cai
- Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, China
| | - Dao-Ying Geng
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai, China; Greater Bay Area Institute of Precision Medicine (Guangzhou), Fudan University, Guangzhou, China.
| |
|
46
|
Compagnone C, Borrini G, Calabrese A, Taddei M, Bellini V, Bignami E. Artificial intelligence enhanced ultrasound (AI-US) in a severe obese parturient: a case report. Ultrasound J 2022; 14:34. [PMID: 35920947 PMCID: PMC9349326 DOI: 10.1186/s13089-022-00283-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2022] [Accepted: 07/20/2022] [Indexed: 12/07/2022] Open
Abstract
Background Neuraxial anesthesia in obese parturients can be challenging due to anatomical and physiological modifications secondary to pregnancy; this has led to the growing popularity of spine ultrasound in this population for easing landmark identification and procedure execution. Integration of artificial intelligence with ultrasound (AI-US) for image enhancement and analysis has increased clinicians' ability to localize vertebral structures in patients with challenging anatomy. Case presentation We present the case of a parturient with extremely severe obesity, with a Body Mass Index (BMI) of 64.5 kg/m2, in whom AI-enabled image recognition allowed successful placement of an epidural catheter. Conclusions The benefits of AI-US implementation are multiple: immediate recognition of anatomical structures increases the first-attempt success rate, makes spinal anesthesia easier to perform than with traditional palpation methods, reduces needle placement time, and predicts the best needle direction and target structure depth in epidural anesthesia.
|
47
|
Edwards C, Chamunyonga C, Searle B, Reddan T. The application of artificial intelligence in the sonography profession: Professional and educational considerations. ULTRASOUND (LEEDS, ENGLAND) 2022; 30:273-282. [PMID: 36969531 PMCID: PMC10034654 DOI: 10.1177/1742271x211072473] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2021] [Accepted: 12/16/2021] [Indexed: 12/22/2022]
Abstract
The integration of artificial intelligence (AI) technology within the health industry is increasing. This educational piece discusses the implementation of AI and its impact on sonography. The authors investigate how AI may influence the profession and provide examples of how ultrasound imaging may be enhanced and innovated by integrating AI technology. This article highlights challenges related to the application of AI and provides insight into how they could be addressed. The critical distinction between the role of a sonographer and the reporting specialist in the context of AI is highlighted as a key issue for those developing, researching, and evaluating AI systems. A key recommendation is for the sonography community to address ultrasound education, particularly how AI knowledge could be incorporated into university education. This is an important consideration that should be extended to practising professionals as they may be involved in evaluating the efficiency and methodologies used in new research that may incorporate AI technologies.
Affiliation(s)
- Christopher Edwards
- School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
| | - Crispen Chamunyonga
- School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Department of Medical Imaging, Redcliffe Hospital, Redcliffe, QLD, Australia
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
| | - Benjamin Searle
- School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Department of Medical Imaging, Redcliffe Hospital, Redcliffe, QLD, Australia
| | - Tristan Reddan
- School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Medical Imaging and Nuclear Medicine, Queensland Children’s Hospital, South Brisbane, QLD, Australia
| |
|
48
|
Zhang H, Huo F. Prediction of early recurrence of HCC after hepatectomy by contrast-enhanced ultrasound-based deep learning radiomics. Front Oncol 2022; 12:930458. [PMID: 36248986 PMCID: PMC9554932 DOI: 10.3389/fonc.2022.930458] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Accepted: 09/07/2022] [Indexed: 12/07/2022] Open
Abstract
Objective This study aims to evaluate a predictive model based on deep learning (DL) and radiomics features from contrast-enhanced ultrasound (CEUS) to predict early recurrence (ER) in patients with hepatocellular carcinoma (HCC). Methods One hundred seventy-two patients with HCC who underwent hepatectomy and were followed up for at least 1 year were included in this retrospective study. The data were split into training and test sets at a 7:3 ratio. The ResNet-50 architecture, CEUS-based radiomics, and the combined model were used to predict early recurrence of HCC after hepatectomy. Receiver operating characteristic (ROC) and calibration curves were drawn to evaluate diagnostic efficiency. Results The areas under the ROC curve (AUCs) of the CEUS-based radiomics model were 0.774 and 0.763 in the training and test sets, respectively. The DL model showed increased prognostic value, with training- and test-set AUCs of 0.885 and 0.834, respectively. The combined model achieved AUCs of 0.943 and 0.882 in the training and test sets, respectively. Conclusion The deep learning radiomics model integrating DL and radiomics features from CEUS predicted ER with satisfactory performance; its diagnostic efficiency was significantly better than that of either single model.
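The AUC values this abstract compares are areas under the empirical ROC curve; one direct way to compute AUC is as the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties count half). A minimal sketch with hypothetical recurrence scores, not the study's data:

```python
def auc(scores, labels):
    """Empirical AUC: fraction of positive/negative pairs where the
    positive case receives the higher score (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for recurrence (1) vs. no recurrence (0):
toy_auc = auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0])
# 5 of 6 positive/negative pairs are correctly ordered, so toy_auc = 5/6
```

This pairwise formulation is equivalent to integrating the ROC curve by the trapezoidal rule, which is why AUC is reported per dataset (training vs. test) rather than per threshold.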
Affiliation(s)
- Hui Zhang
- Department of Ultrasound, Nanchong Central Hospital, The Second Clinical Medical College, North Sichuan Medical College (University), Nanchong, Sichuan, China
| | - Fanding Huo
- Department of Medical Ultrasound, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, China
- Chinese Academy of Sciences Sichuan Translational Medicine Research Hospital, Chengdu, China
- *Correspondence: Fanding Huo,
| |
|
49
|
Byra M, Dobruch-Sobczak K, Piotrzkowska-Wroblewska H, Klimonda Z, Litniewski J. Prediction of response to neoadjuvant chemotherapy in breast cancer with recurrent neural networks and raw ultrasound signals. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8c82] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 08/24/2022] [Indexed: 12/07/2022]
Abstract
Objective. Prediction of the response to neoadjuvant chemotherapy (NAC) in breast cancer is important for patient outcomes. In this work, we propose a deep learning based approach to NAC response prediction in ultrasound (US) imaging. Approach. We develop recurrent neural networks that can process serial US imaging data to predict chemotherapy outcomes. We present models that can process either raw radio-frequency (RF) US data or regular US images. The proposed approach is evaluated on 204 sequences of US data from 51 breast cancers. Each sequence included US data collected before chemotherapy and after each subsequent dose, up to the 4th course. We investigate three pre-trained convolutional neural networks (CNNs) as backbone feature extractors for the recurrent network. The CNNs were pre-trained using raw US RF data, US B-mode images, and RGB images from the ImageNet dataset. The first two networks were developed using US data collected from malignant and benign breast masses. Main results. For the pre-treatment data, the better performing network, with backbone CNN pre-trained on US images, achieved an area under the receiver operating characteristic curve (AUC) of 0.81 (±0.04). Performance of the recurrent networks improved with each course of chemotherapy. For the 4th course, the better performing model, based on the CNN pre-trained with RGB images, achieved an AUC of 0.93 (±0.03). Statistical analysis based on the DeLong test showed no significant differences in AUC values between the pre-trained networks at any stage of the chemotherapy (p-values > 0.05). Significance. Our study demonstrates the feasibility of using recurrent neural networks for NAC response prediction in breast cancer US.
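The serial-data idea described above (a backbone CNN extracts a feature vector per imaging time point, and a recurrent network accumulates them into one hidden state) can be illustrated with a bare Elman-style update; the tiny dimensions, toy features and weights below are hypothetical, not the authors' architecture:

```python
import math

def rnn_step(h, x, W_h, W_x, b):
    """One Elman-style update h' = tanh(W_h @ h + W_x @ x + b),
    using plain nested lists for weights. Illustrative sketch only."""
    return [math.tanh(sum(W_h[i][j] * h[j] for j in range(len(h)))
                      + sum(W_x[i][j] * x[j] for j in range(len(x)))
                      + b[i])
            for i in range(len(h))]

# Toy 2-D feature vectors from 4 chemotherapy time points
# (pre-treatment plus three courses):
seq = [[0.1, 0.2], [0.3, 0.1], [0.5, 0.4], [0.2, 0.6]]
h = [0.0, 0.0]                        # initial hidden state
W_h = [[0.5, 0.0], [0.0, 0.5]]        # hypothetical recurrent weights
W_x = [[1.0, 0.0], [0.0, 1.0]]        # hypothetical input weights
b = [0.0, 0.0]
for x in seq:
    h = rnn_step(h, x, W_h, W_x, b)
# h now summarizes the whole series; a final linear layer would map it
# to a response probability.
```

Processing the sequence step by step is what lets such a model be re-evaluated after each course, matching the abstract's observation that performance improves as more doses are observed.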
|
50
|
Kornblith AE, Addo N, Dong R, Rogers R, Grupp-Phelan J, Butte A, Gupta P, Callcut RA, Arnaout R. Development and Validation of a Deep Learning Strategy for Automated View Classification of Pediatric Focused Assessment With Sonography for Trauma. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2022; 41:1915-1924. [PMID: 34741469 PMCID: PMC9072593 DOI: 10.1002/jum.15868] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Revised: 10/14/2021] [Accepted: 10/19/2021] [Indexed: 06/09/2023]
Abstract
OBJECTIVE Pediatric focused assessment with sonography for trauma (FAST) is a sequence of ultrasound views rapidly performed by clinicians to diagnose hemorrhage. A technical limitation of FAST is the lack of expertise to consistently acquire all required views. We sought to develop an accurate deep learning view classifier using a large heterogeneous dataset of clinician-performed pediatric FAST. METHODS We developed and conducted a retrospective cohort analysis of a deep learning view classifier on real-world FAST studies performed on injured children less than 18 years old in two pediatric emergency departments by 30 different clinicians. FAST studies were randomly distributed to training, validation, and test datasets in a 70:20:10 ratio; each child was represented in only one dataset. The primary outcome was view classifier accuracy for video clips and still frames. RESULTS There were 699 FAST studies, representing 4925 video clips and 1,062,612 still frames, performed by 30 different clinicians. The overall classification accuracy was 97.8% (95% confidence interval [CI]: 96.0-99.0) for video clips and 93.4% (95% CI: 93.3-93.6) for still frames. Per-view still-frame accuracies were: 96.0% (95% CI: 95.9-96.1) cardiac, 99.8% (95% CI: 99.8-99.8) pleural, 95.2% (95% CI: 95.0-95.3) abdominal upper quadrants, and 95.9% (95% CI: 95.8-96.0) suprapubic. CONCLUSION A deep learning classifier can accurately predict pediatric FAST views. Accurate view classification is important for quality assurance and for the feasibility of a multi-stage deep learning FAST model to enhance the evaluation of injured children.
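The accuracy figures above are binomial proportions with 95% confidence intervals; the abstract does not state which interval method was used, but a Wilson score interval is one common choice for proportions near 1. A minimal sketch with hypothetical counts:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a proportion k/n at ~95% coverage
    (z = 1.96). Sketch only; other interval methods exist."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical: 975 of 1000 clips classified correctly.
lo, hi = wilson_ci(975, 1000)
```

Note how the interval width shrinks with n, which is why the per-frame CIs above (n over a million) are far tighter than the per-clip CI (n = 4925).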
Affiliation(s)
- Aaron E Kornblith
- Department of Emergency Medicine, University of California, San Francisco, CA, USA
- Department of Pediatrics, University of California, San Francisco, CA, USA
- Bakar Computational Health Sciences Institute, University of California, San Francisco, CA, USA
| | - Newton Addo
- Department of Emergency Medicine, University of California, San Francisco, CA, USA
- Department of Medicine, Division of Cardiology, University of California, San Francisco, CA, USA
| | - Ruolei Dong
- Department of Bioengineering, University of California, Berkeley, CA, USA
- Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, CA, USA
| | - Robert Rogers
- Center for Digital Health Innovation, University of California, San Francisco, CA, USA
| | - Jacqueline Grupp-Phelan
- Department of Emergency Medicine, University of California, San Francisco, CA, USA
- Department of Pediatrics, University of California, San Francisco, CA, USA
| | - Atul Butte
- Department of Pediatrics, University of California, San Francisco, CA, USA
- Bakar Computational Health Sciences Institute, University of California, San Francisco, CA, USA
| | - Pavan Gupta
- Center for Digital Health Innovation, University of California, San Francisco, CA, USA
| | - Rachael A Callcut
- Center for Digital Health Innovation, University of California, San Francisco, CA, USA
- Department of Surgery, University of California, Davis, CA, USA
| | - Rima Arnaout
- Bakar Computational Health Sciences Institute, University of California, San Francisco, CA, USA
- Department of Medicine, Division of Cardiology, University of California, San Francisco, CA, USA
| |
|